Test Report: Docker_Linux_crio_arm64 21682

7a7892355cfa060afe2cc9d2507b1d1308b66169:2025-10-02:41740

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.69
35 TestAddons/parallel/Registry 18.28
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 144.49
38 TestAddons/parallel/InspektorGadget 5.49
39 TestAddons/parallel/MetricsServer 6.38
41 TestAddons/parallel/CSI 41.77
42 TestAddons/parallel/Headlamp 3.68
43 TestAddons/parallel/CloudSpanner 6.3
44 TestAddons/parallel/LocalPath 8.5
45 TestAddons/parallel/NvidiaDevicePlugin 5.29
46 TestAddons/parallel/Yakd 6.27
52 TestForceSystemdFlag 510.6
53 TestForceSystemdEnv 512.97
98 TestFunctional/parallel/ServiceCmdConnect 603.53
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.87
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
136 TestFunctional/parallel/ServiceCmd/Format 0.46
137 TestFunctional/parallel/ServiceCmd/URL 0.49
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
191 TestJSONOutput/pause/Command 1.86
197 TestJSONOutput/unpause/Command 1.9
281 TestPause/serial/Pause 7.62
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.33
305 TestStartStop/group/old-k8s-version/serial/Pause 6.91
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.43
317 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.98
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.32
327 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.79
329 TestStartStop/group/embed-certs/serial/Pause 6.51
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.43
344 TestStartStop/group/newest-cni/serial/Pause 6.03
348 TestStartStop/group/no-preload/serial/Pause 8.09
TestAddons/serial/Volcano (0.69s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable volcano --alsologtostderr -v=1: exit status 11 (691.969886ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:09:13.509632 1279104 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:13.510597 1279104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:13.510651 1279104 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:13.510671 1279104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:13.510975 1279104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:09:13.511333 1279104 mustload.go:65] Loading cluster: addons-806706
	I1002 21:09:13.511881 1279104 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:13.511929 1279104 addons.go:606] checking whether the cluster is paused
	I1002 21:09:13.512078 1279104 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:13.512117 1279104 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:09:13.512618 1279104 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:09:13.531018 1279104 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:13.531080 1279104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:09:13.548731 1279104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:09:13.645270 1279104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:13.645375 1279104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:13.679443 1279104 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:09:13.679465 1279104 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:09:13.679471 1279104 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:09:13.679475 1279104 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:09:13.679478 1279104 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:09:13.679481 1279104 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:09:13.679484 1279104 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:09:13.679489 1279104 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:09:13.679492 1279104 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:09:13.679498 1279104 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:09:13.679502 1279104 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:09:13.679505 1279104 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:09:13.679509 1279104 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:09:13.679512 1279104 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:09:13.679515 1279104 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:09:13.679520 1279104 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:09:13.679524 1279104 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:09:13.679527 1279104 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:09:13.679530 1279104 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:09:13.679533 1279104 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:09:13.679538 1279104 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:09:13.679541 1279104 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:09:13.679543 1279104 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:09:13.679546 1279104 cri.go:89] found id: ""
	I1002 21:09:13.679596 1279104 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:09:13.694923 1279104 out.go:203] 
	W1002 21:09:13.697880 1279104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:09:13.697906 1279104 out.go:285] * 
	* 
	W1002 21:09:14.114416 1279104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:09:14.117579 1279104 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.69s)
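
Note on the failure mode: the MK_ADDON_DISABLE_PAUSED exit above traces to a single probe. Before disabling an addon, minikube checks whether the cluster is paused; on this CRI-O node the check shells out to `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" because runc's default state root was never created (plausibly because CRI-O drives its containers under a different state root or runtime binary). The Go sketch below is a minimal reconstruction of that check under those assumptions; the names are illustrative and the real logic lives in minikube's cri/cruntime packages. It shows why a missing state root surfaces as a hard error (exit status 11) rather than as "no paused containers".

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// runcState mirrors the fields of interest in the JSON emitted by
// `runc list -f json` (an array of container state objects).
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "paused" for frozen containers
}

// listPaused reproduces the shape of minikube's paused-cluster probe.
func listPaused() ([]string, error) {
	// Exact command from the log above. `runc list` reads its state root
	// (default /run/runc); if that directory does not exist, runc exits 1
	// with "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The probe treats any failure here as a hard error, which
		// `addons disable` then reports as MK_ADDON_DISABLE_PAUSED
		// (exit status 11), instead of reading a missing state root
		// as "nothing is paused".
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("paused containers:", paused)
}

The identical probe and error recur verbatim in the Registry and RegistryCreds failures below.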

TestAddons/parallel/Registry (18.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.587438ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003512997s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00380478s
addons_test.go:392: (dbg) Run:  kubectl --context addons-806706 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-806706 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-806706 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.731296449s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 ip
2025/10/02 21:09:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable registry --alsologtostderr -v=1: exit status 11 (264.742821ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:09:42.519570 1279716 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:42.525157 1279716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:42.525183 1279716 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:42.525189 1279716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:42.525493 1279716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:09:42.525825 1279716 mustload.go:65] Loading cluster: addons-806706
	I1002 21:09:42.526264 1279716 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:42.526285 1279716 addons.go:606] checking whether the cluster is paused
	I1002 21:09:42.526395 1279716 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:42.526420 1279716 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:09:42.526945 1279716 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:09:42.545347 1279716 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:42.545419 1279716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:09:42.564728 1279716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:09:42.660591 1279716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:42.660678 1279716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:42.692090 1279716 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:09:42.692113 1279716 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:09:42.692119 1279716 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:09:42.692123 1279716 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:09:42.692126 1279716 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:09:42.692135 1279716 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:09:42.692139 1279716 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:09:42.692142 1279716 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:09:42.692146 1279716 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:09:42.692153 1279716 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:09:42.692161 1279716 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:09:42.692164 1279716 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:09:42.692167 1279716 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:09:42.692170 1279716 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:09:42.692174 1279716 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:09:42.692179 1279716 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:09:42.692189 1279716 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:09:42.692197 1279716 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:09:42.692200 1279716 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:09:42.692203 1279716 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:09:42.692209 1279716 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:09:42.692213 1279716 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:09:42.692216 1279716 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:09:42.692219 1279716 cri.go:89] found id: ""
	I1002 21:09:42.692280 1279716 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:09:42.707733 1279716 out.go:203] 
	W1002 21:09:42.710718 1279716 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:09:42.710746 1279716 out.go:285] * 
	* 
	W1002 21:09:42.719483 1279716 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:09:42.722703 1279716 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (18.28s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.572928ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-806706
addons_test.go:332: (dbg) Run:  kubectl --context addons-806706 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (278.948044ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:10:18.665263 1281520 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:18.666095 1281520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:18.666136 1281520 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:18.666157 1281520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:18.666573 1281520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:18.666954 1281520 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:18.667429 1281520 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:18.667472 1281520 addons.go:606] checking whether the cluster is paused
	I1002 21:10:18.667619 1281520 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:18.667656 1281520 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:18.668172 1281520 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:18.686156 1281520 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:18.686215 1281520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:18.705577 1281520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:18.800944 1281520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:18.801030 1281520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:18.850167 1281520 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:18.850193 1281520 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:18.850198 1281520 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:18.850202 1281520 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:18.850205 1281520 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:18.850208 1281520 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:18.850212 1281520 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:18.850215 1281520 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:18.850218 1281520 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:18.850238 1281520 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:18.850241 1281520 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:18.850244 1281520 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:18.850247 1281520 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:18.850250 1281520 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:18.850253 1281520 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:18.850265 1281520 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:18.850268 1281520 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:18.850276 1281520 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:18.850280 1281520 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:18.850283 1281520 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:18.850287 1281520 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:18.850290 1281520 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:18.850293 1281520 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:18.850295 1281520 cri.go:89] found id: ""
	I1002 21:10:18.850351 1281520 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:18.867300 1281520 out.go:203] 
	W1002 21:10:18.870287 1281520 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:18.870315 1281520 out.go:285] * 
	* 
	W1002 21:10:18.879310 1281520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:18.883783 1281520 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (144.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-806706 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-806706 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-806706 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0ecb6875-ccfb-4251-a7b9-2ed8c63db2d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0ecb6875-ccfb-4251-a7b9-2ed8c63db2d0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00343296s
I1002 21:10:22.125728 1272514 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.650686875s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
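
For reference, "ssh: Process exited with status 28" is curl's exit code 28 (operation timed out) propagated through `minikube ssh`: the request to 127.0.0.1:80 inside the node never received a response from the ingress controller. The Go sketch below shows the shape of what this step verifies, host-header routing through the NGINX ingress installed from testdata/nginx-ingress-v1.yaml; the client timeout here is illustrative, not the harness's actual value.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// The ingress rule matches on the Host header, so the probe connects
	// to the node's loopback address but asks for nginx.example.com,
	// exactly as the failing curl does above.
	client := &http.Client{Timeout: 10 * time.Second} // illustrative timeout
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req.Host = "nginx.example.com" // routing key for the ingress controller
	resp, err := client.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "ingress probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d bytes=%d\n", resp.StatusCode, len(body))
}

Against a healthy cluster this would print the response served by the nginx test pod; in this run the connection stalls until the timeout fires.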
addons_test.go:288: (dbg) Run:  kubectl --context addons-806706 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-806706
helpers_test.go:243: (dbg) docker inspect addons-806706:

-- stdout --
	[
	    {
	        "Id": "9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326",
	        "Created": "2025-10-02T21:06:39.319392408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1273669,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:06:39.396538887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/hostname",
	        "HostsPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/hosts",
	        "LogPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326-json.log",
	        "Name": "/addons-806706",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-806706:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-806706",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326",
	                "LowerDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-806706",
	                "Source": "/var/lib/docker/volumes/addons-806706/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-806706",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-806706",
	                "name.minikube.sigs.k8s.io": "addons-806706",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d72fa3be9f92cc6781c93044512038d9c9312512a7165ebfb0e6bfd1c1cf2449",
	            "SandboxKey": "/var/run/docker/netns/d72fa3be9f92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34271"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34272"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34273"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-806706": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:17:88:4d:37:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9d7cb88a1b7da1c76acaec51b35fa75e6ad9973eeb74a743230e10d2aa77d173",
	                    "EndpointID": "1b864a8d2c7536528d3e52fde8072c5e0edbbcc0af0bda67bbb1694b1300cc8e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-806706",
	                        "9be5d6290945"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-806706 -n addons-806706
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-806706 logs -n 25: (1.50597152s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-121503                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-121503 │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ start   │ --download-only -p binary-mirror-339125 --alsologtostderr --binary-mirror http://127.0.0.1:33819 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-339125   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ delete  │ -p binary-mirror-339125                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-339125   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ addons  │ enable dashboard -p addons-806706                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ addons  │ disable dashboard -p addons-806706                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ start   │ -p addons-806706 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ ip      │ addons-806706 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ ssh     │ addons-806706 ssh cat /opt/local-path-provisioner/pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ enable headlamp -p addons-806706 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ addons-806706 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ addons-806706 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ addons-806706 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ addons-806706 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ addons-806706 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-806706                                                                                                                                                                                                                                                                                                                                                                                           │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │ 02 Oct 25 21:10 UTC │
	│ addons  │ addons-806706 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ ssh     │ addons-806706 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ ip      │ addons-806706 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:12 UTC │ 02 Oct 25 21:12 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:06:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
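
The header above documents the klog-style format of every line that follows. For post-processing such traces, here is a minimal Go sketch; only the format string in the header is taken from the log, and the regexp and field names are illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine follows the documented format:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1002 21:06:13.121339 1273271 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
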
	I1002 21:06:13.121339 1273271 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:06:13.121464 1273271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:06:13.121476 1273271 out.go:374] Setting ErrFile to fd 2...
	I1002 21:06:13.121482 1273271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:06:13.121746 1273271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:06:13.122246 1273271 out.go:368] Setting JSON to false
	I1002 21:06:13.123268 1273271 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20899,"bootTime":1759418275,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:06:13.123344 1273271 start.go:140] virtualization:  
	I1002 21:06:13.126799 1273271 out.go:179] * [addons-806706] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:06:13.130740 1273271 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:06:13.130875 1273271 notify.go:220] Checking for updates...
	I1002 21:06:13.136704 1273271 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:06:13.139666 1273271 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:06:13.142531 1273271 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:06:13.145295 1273271 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:06:13.148362 1273271 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:06:13.151421 1273271 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:06:13.186745 1273271 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:06:13.186952 1273271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:06:13.246945 1273271 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 21:06:13.238002623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:06:13.247060 1273271 docker.go:318] overlay module found
	I1002 21:06:13.250080 1273271 out.go:179] * Using the docker driver based on user configuration
	I1002 21:06:13.252970 1273271 start.go:304] selected driver: docker
	I1002 21:06:13.252988 1273271 start.go:924] validating driver "docker" against <nil>
	I1002 21:06:13.253003 1273271 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:06:13.253740 1273271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:06:13.305284 1273271 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 21:06:13.295940664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:06:13.305442 1273271 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:06:13.305691 1273271 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:06:13.308441 1273271 out.go:179] * Using Docker driver with root privileges
	I1002 21:06:13.311217 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:06:13.311287 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:06:13.311300 1273271 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:06:13.311373 1273271 start.go:348] cluster config:
	{Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:06:13.314316 1273271 out.go:179] * Starting "addons-806706" primary control-plane node in "addons-806706" cluster
	I1002 21:06:13.317190 1273271 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:06:13.320049 1273271 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:06:13.322838 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:13.322898 1273271 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:06:13.322912 1273271 cache.go:58] Caching tarball of preloaded images
	I1002 21:06:13.322934 1273271 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:06:13.322998 1273271 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:06:13.323008 1273271 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:06:13.323391 1273271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json ...
	I1002 21:06:13.323423 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json: {Name:mkb1cb32b6df00b640649c3c3bbb07793752531e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:13.338513 1273271 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 21:06:13.338655 1273271 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 21:06:13.338679 1273271 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 21:06:13.338688 1273271 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 21:06:13.338697 1273271 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 21:06:13.338703 1273271 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 21:06:31.767536 1273271 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 21:06:31.767586 1273271 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:06:31.767616 1273271 start.go:360] acquireMachinesLock for addons-806706: {Name:mka9cc2a7600d2ba078caf421120722db5c4e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:06:31.768340 1273271 start.go:364] duration metric: took 696.085µs to acquireMachinesLock for "addons-806706"
	I1002 21:06:31.768381 1273271 start.go:93] Provisioning new machine with config: &{Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:06:31.768468 1273271 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:06:31.771911 1273271 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 21:06:31.772179 1273271 start.go:159] libmachine.API.Create for "addons-806706" (driver="docker")
	I1002 21:06:31.772238 1273271 client.go:168] LocalClient.Create starting
	I1002 21:06:31.772360 1273271 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 21:06:32.096500 1273271 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 21:06:32.529400 1273271 cli_runner.go:164] Run: docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:06:32.548362 1273271 cli_runner.go:211] docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:06:32.548472 1273271 network_create.go:284] running [docker network inspect addons-806706] to gather additional debugging logs...
	I1002 21:06:32.548493 1273271 cli_runner.go:164] Run: docker network inspect addons-806706
	W1002 21:06:32.566729 1273271 cli_runner.go:211] docker network inspect addons-806706 returned with exit code 1
	I1002 21:06:32.566776 1273271 network_create.go:287] error running [docker network inspect addons-806706]: docker network inspect addons-806706: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-806706 not found
	I1002 21:06:32.566791 1273271 network_create.go:289] output of [docker network inspect addons-806706]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-806706 not found
	
	** /stderr **
	I1002 21:06:32.566905 1273271 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:06:32.582890 1273271 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b9b450}
	I1002 21:06:32.582938 1273271 network_create.go:124] attempt to create docker network addons-806706 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:06:32.582998 1273271 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-806706 addons-806706
	I1002 21:06:32.642620 1273271 network_create.go:108] docker network addons-806706 192.168.49.0/24 created
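
The lines above show minikube probing for a free private /24 (settling on 192.168.49.0/24) before creating the bridge network. A hedged Go sketch of the same idea, not minikube's actual network_create implementation: collect the subnets Docker already uses, then walk candidate 192.168.x.0/24 ranges until one does not overlap (the starting octet comes from this log; the step of 9 is illustrative):

	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"strings"
	)

	func main() {
		// Collect the subnets of every existing Docker network.
		ids, err := exec.Command("docker", "network", "ls", "-q").Output()
		if err != nil {
			panic(err)
		}
		var used []*net.IPNet
		for _, id := range strings.Fields(string(ids)) {
			out, err := exec.Command("docker", "network", "inspect", id,
				"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
			if err != nil {
				continue // networks without IPAM info are skipped
			}
			for _, s := range strings.Fields(string(out)) {
				if _, n, err := net.ParseCIDR(s); err == nil {
					used = append(used, n)
				}
			}
		}
		// Walk candidate /24s starting at 192.168.49.0, as the log shows.
		for third := 49; third < 256; third += 9 {
			candidate := fmt.Sprintf("192.168.%d.0/24", third)
			_, cand, _ := net.ParseCIDR(candidate)
			overlaps := false
			for _, u := range used {
				// Two subnets overlap iff either contains the other's network address.
				if u.Contains(cand.IP) || cand.Contains(u.IP) {
					overlaps = true
					break
				}
			}
			if !overlaps {
				fmt.Println("free private subnet:", candidate)
				return
			}
		}
		fmt.Println("no free subnet found")
	}
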
	I1002 21:06:32.642657 1273271 kic.go:121] calculated static IP "192.168.49.2" for the "addons-806706" container
	I1002 21:06:32.642743 1273271 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:06:32.662706 1273271 cli_runner.go:164] Run: docker volume create addons-806706 --label name.minikube.sigs.k8s.io=addons-806706 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:06:32.680580 1273271 oci.go:103] Successfully created a docker volume addons-806706
	I1002 21:06:32.680678 1273271 cli_runner.go:164] Run: docker run --rm --name addons-806706-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --entrypoint /usr/bin/test -v addons-806706:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:06:34.846376 1273271 cli_runner.go:217] Completed: docker run --rm --name addons-806706-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --entrypoint /usr/bin/test -v addons-806706:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.165658628s)
	I1002 21:06:34.846407 1273271 oci.go:107] Successfully prepared a docker volume addons-806706
	I1002 21:06:34.846434 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:34.846452 1273271 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:06:34.846541 1273271 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-806706:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:06:39.247387 1273271 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-806706:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.400792798s)
	I1002 21:06:39.247424 1273271 kic.go:203] duration metric: took 4.40096744s to extract preloaded images to volume ...
	W1002 21:06:39.247576 1273271 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:06:39.247694 1273271 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:06:39.305141 1273271 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-806706 --name addons-806706 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-806706 --network addons-806706 --ip 192.168.49.2 --volume addons-806706:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:06:39.606899 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Running}}
	I1002 21:06:39.629340 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:39.659649 1273271 cli_runner.go:164] Run: docker exec addons-806706 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:06:39.734622 1273271 oci.go:144] the created container "addons-806706" has a running status.
	I1002 21:06:39.734654 1273271 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa...
	I1002 21:06:40.011473 1273271 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:06:40.064091 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:40.088102 1273271 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:06:40.088128 1273271 kic_runner.go:114] Args: [docker exec --privileged addons-806706 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:06:40.157846 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:40.179389 1273271 machine.go:93] provisionDockerMachine start ...
	I1002 21:06:40.179511 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:40.200361 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:40.200704 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:40.200721 1273271 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:06:40.201419 1273271 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 21:06:43.335686 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-806706
	
	I1002 21:06:43.336453 1273271 ubuntu.go:182] provisioning hostname "addons-806706"
	I1002 21:06:43.336535 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:43.353867 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:43.354194 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:43.354206 1273271 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-806706 && echo "addons-806706" | sudo tee /etc/hostname
	I1002 21:06:43.496201 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-806706
	
	I1002 21:06:43.496285 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:43.514653 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:43.514974 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:43.514995 1273271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-806706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-806706/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-806706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:06:43.646703 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
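
Each provisioning step above is an SSH round-trip to the container's forwarded port (127.0.0.1:34271) as the docker user. A minimal sketch of one such exchange, assuming the golang.org/x/crypto/ssh package and reusing the key path and port shown in this log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and forwarded port taken from the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded port
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34271", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
	}
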
	I1002 21:06:43.646731 1273271 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 21:06:43.646757 1273271 ubuntu.go:190] setting up certificates
	I1002 21:06:43.646766 1273271 provision.go:84] configureAuth start
	I1002 21:06:43.646842 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:43.663150 1273271 provision.go:143] copyHostCerts
	I1002 21:06:43.663234 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 21:06:43.663376 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 21:06:43.663447 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 21:06:43.663507 1273271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.addons-806706 san=[127.0.0.1 192.168.49.2 addons-806706 localhost minikube]
	I1002 21:06:44.759554 1273271 provision.go:177] copyRemoteCerts
	I1002 21:06:44.759624 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:06:44.759675 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:44.775869 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:44.873459 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:06:44.890817 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 21:06:44.907788 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:06:44.924703 1273271 provision.go:87] duration metric: took 1.277909883s to configureAuth
	I1002 21:06:44.924732 1273271 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:06:44.924940 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:06:44.925047 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:44.941686 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:44.942007 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:44.942056 1273271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:06:45.330150 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:06:45.330244 1273271 machine.go:96] duration metric: took 5.150826591s to provisionDockerMachine
	I1002 21:06:45.330283 1273271 client.go:171] duration metric: took 13.558032024s to LocalClient.Create
	I1002 21:06:45.330346 1273271 start.go:167] duration metric: took 13.558165618s to libmachine.API.Create "addons-806706"
	I1002 21:06:45.330382 1273271 start.go:293] postStartSetup for "addons-806706" (driver="docker")
	I1002 21:06:45.330435 1273271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:06:45.330568 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:06:45.330684 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.387898 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.494809 1273271 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:06:45.498593 1273271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:06:45.498620 1273271 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:06:45.498631 1273271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 21:06:45.498702 1273271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 21:06:45.498723 1273271 start.go:296] duration metric: took 168.30807ms for postStartSetup
	I1002 21:06:45.499062 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:45.516293 1273271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json ...
	I1002 21:06:45.516603 1273271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:06:45.516646 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.533460 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.627370 1273271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:06:45.633584 1273271 start.go:128] duration metric: took 13.865099892s to createHost
	I1002 21:06:45.633610 1273271 start.go:83] releasing machines lock for "addons-806706", held for 13.86525257s
	I1002 21:06:45.633707 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:45.650269 1273271 ssh_runner.go:195] Run: cat /version.json
	I1002 21:06:45.650330 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.650573 1273271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:06:45.650635 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.674253 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.677548 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.773595 1273271 ssh_runner.go:195] Run: systemctl --version
	I1002 21:06:45.867753 1273271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:06:45.904646 1273271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:06:45.909027 1273271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:06:45.909102 1273271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:06:45.937107 1273271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:06:45.937129 1273271 start.go:495] detecting cgroup driver to use...
	I1002 21:06:45.937162 1273271 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:06:45.937218 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:06:45.955186 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:06:45.967891 1273271 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:06:45.967992 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:06:45.986053 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:06:46.003855 1273271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:06:46.128412 1273271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:06:46.252519 1273271 docker.go:234] disabling docker service ...
	I1002 21:06:46.252585 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:06:46.273862 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:06:46.286919 1273271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:06:46.395406 1273271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:06:46.516063 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:06:46.529201 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:06:46.543438 1273271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:06:46.543519 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.553027 1273271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:06:46.553107 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.562277 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.571296 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.580279 1273271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:06:46.588197 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.597248 1273271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.610873 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.619919 1273271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:06:46.627580 1273271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:06:46.635174 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:06:46.754135 1273271 ssh_runner.go:195] Run: sudo systemctl restart crio
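
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts CRI-O. A stand-alone sketch of the same edit-and-restart cycle, assuming local root access rather than minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
		}
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		// The same substitutions the log shows, applied locally with sed.
		run("sudo", "sed", "-i",
			`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, conf)
		run("sudo", "sed", "-i",
			`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
		run("sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf)
		run("sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf)
		// Reload units and restart the runtime, as in the log.
		run("sudo", "systemctl", "daemon-reload")
		run("sudo", "systemctl", "restart", "crio")
		fmt.Println("cri-o reconfigured and restarted")
	}
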
	I1002 21:06:46.876077 1273271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:06:46.876242 1273271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:06:46.880227 1273271 start.go:563] Will wait 60s for crictl version
	I1002 21:06:46.880345 1273271 ssh_runner.go:195] Run: which crictl
	I1002 21:06:46.883879 1273271 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:06:46.907117 1273271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:06:46.907307 1273271 ssh_runner.go:195] Run: crio --version
	I1002 21:06:46.935315 1273271 ssh_runner.go:195] Run: crio --version
	I1002 21:06:46.969733 1273271 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:06:46.972578 1273271 cli_runner.go:164] Run: docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:06:46.988815 1273271 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:06:46.992722 1273271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:06:47.003351 1273271 kubeadm.go:883] updating cluster {Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:06:47.003508 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:47.003579 1273271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:06:47.038873 1273271 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:06:47.038895 1273271 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:06:47.038950 1273271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:06:47.067946 1273271 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:06:47.067969 1273271 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:06:47.067977 1273271 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:06:47.068061 1273271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-806706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:06:47.068147 1273271 ssh_runner.go:195] Run: crio config
	I1002 21:06:47.119332 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:06:47.119355 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:06:47.119373 1273271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:06:47.119414 1273271 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-806706 NodeName:addons-806706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:06:47.119563 1273271 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-806706"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
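
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is subsequently copied to /var/tmp/minikube/kubeadm.yaml.new. A small standard-library Go sketch that splits such a stream and reports each document's kind; the local file name is hypothetical:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		doc := 0
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if strings.TrimSpace(line) == "---" {
				doc++ // "---" separates YAML documents
				continue
			}
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", doc+1, strings.TrimPrefix(line, "kind: "))
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}
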
	
	I1002 21:06:47.119636 1273271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:06:47.127243 1273271 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:06:47.127312 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:06:47.134887 1273271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 21:06:47.149355 1273271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:06:47.162564 1273271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1002 21:06:47.175947 1273271 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:06:47.179581 1273271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:06:47.189152 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:06:47.294051 1273271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:06:47.310333 1273271 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706 for IP: 192.168.49.2
	I1002 21:06:47.310356 1273271 certs.go:195] generating shared ca certs ...
	I1002 21:06:47.310372 1273271 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.310500 1273271 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 21:06:47.717769 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt ...
	I1002 21:06:47.717802 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt: {Name:mkad6e6e4490a5c9a5702e976ad0453b70d21cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.718696 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key ...
	I1002 21:06:47.718717 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key: {Name:mka5069782b2362307f91a95829433ee76cf98fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.718873 1273271 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 21:06:48.499587 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt ...
	I1002 21:06:48.499622 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt: {Name:mk98f973c0fc8519a3c830311c50c56d34441e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.499817 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key ...
	I1002 21:06:48.499831 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key: {Name:mkd3b7a0d48b4fcac6b64ca401b68027872a7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.499920 1273271 certs.go:257] generating profile certs ...
	I1002 21:06:48.499979 1273271 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key
	I1002 21:06:48.499997 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt with IP's: []
	I1002 21:06:48.909085 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt ...
	I1002 21:06:48.909118 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: {Name:mk8d33efa59629ba32bda29012092ba282d54569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.909927 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key ...
	I1002 21:06:48.909946 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key: {Name:mk11a1e76e0bf6428500af10fb13698297295501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.910624 1273271 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51
	I1002 21:06:48.910652 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:06:49.184689 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 ...
	I1002 21:06:49.184718 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51: {Name:mkd31106a0539e19ca9a0e5be8892b59bdfc64d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.185523 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51 ...
	I1002 21:06:49.185548 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51: {Name:mka13dee7cbea5738848e70c38f0c188824e9341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.186302 1273271 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt
	I1002 21:06:49.186395 1273271 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key
	I1002 21:06:49.186452 1273271 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key
	I1002 21:06:49.186482 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt with IP's: []
	I1002 21:06:49.566788 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt ...
	I1002 21:06:49.566825 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt: {Name:mkb4b3cf0fa7cc3ad96dcfb6c9caa7554ad2a76c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.567020 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key ...
	I1002 21:06:49.567035 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key: {Name:mk049779da010e6f379f975bacb7429ea8771c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.567880 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:06:49.567937 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:06:49.567966 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:06:49.567992 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 21:06:49.568563 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:06:49.587519 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:06:49.605597 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:06:49.623508 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:06:49.641194 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:06:49.659711 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:06:49.677805 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:06:49.702851 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:06:49.721926 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:06:49.739661 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:06:49.752904 1273271 ssh_runner.go:195] Run: openssl version
	I1002 21:06:49.759200 1273271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:06:49.767563 1273271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.771451 1273271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.771546 1273271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.813667 1273271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
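The two commands above show how minikube installs its CA into the node's trust store: `openssl x509 -hash -noout` yields the OpenSSL subject hash (here b5213941), and the cert is symlinked under `<hash>.0` in /etc/ssl/certs. A minimal local Go sketch of that step follows; the paths and helper name are illustrative, not minikube's actual code, which runs these commands over SSH.

// catrust.go — sketch of the CA-trust step above, run locally.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCATrust links a CA cert under its OpenSSL subject hash so the
// system trust store (/etc/ssl/certs) can resolve it, mirroring the
// "openssl x509 -hash" + "ln -fs" pair in the log.
func installCATrust(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // "ln -fs": drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCATrust("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}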
	I1002 21:06:49.822204 1273271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:06:49.825919 1273271 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:06:49.825970 1273271 kubeadm.go:400] StartCluster: {Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:06:49.826061 1273271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:06:49.826121 1273271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:06:49.856681 1273271 cri.go:89] found id: ""
	I1002 21:06:49.856820 1273271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:06:49.864713 1273271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:06:49.872570 1273271 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:06:49.872634 1273271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:06:49.880463 1273271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:06:49.880481 1273271 kubeadm.go:157] found existing configuration files:
	
	I1002 21:06:49.880532 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:06:49.887967 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:06:49.888057 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:06:49.895164 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:06:49.902843 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:06:49.902963 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:06:49.910192 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:06:49.917620 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:06:49.917687 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:06:49.925026 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:06:49.932616 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:06:49.932702 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
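The four grep/rm pairs above are minikube's stale-config cleanup (kubeadm.go:163 in the log): each kubeconfig is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. A minimal local sketch of that loop, assuming direct file access rather than ssh_runner:

// stale_config.go — sketch of the stale-kubeconfig cleanup loop above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanupStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint; a missing file and a wrong endpoint are
// handled identically, matching the grep-then-"rm -f" pairs in the log.
func cleanupStaleConfigs(files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	cleanupStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}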
	I1002 21:06:49.940014 1273271 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:06:49.979453 1273271 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:06:49.979746 1273271 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:06:50.004532 1273271 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:06:50.004616 1273271 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:06:50.004660 1273271 kubeadm.go:318] OS: Linux
	I1002 21:06:50.004711 1273271 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:06:50.004766 1273271 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:06:50.004821 1273271 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:06:50.004877 1273271 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:06:50.004956 1273271 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:06:50.005011 1273271 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:06:50.005065 1273271 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:06:50.005120 1273271 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:06:50.005174 1273271 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:06:50.079292 1273271 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:06:50.079414 1273271 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:06:50.079520 1273271 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:06:50.090564 1273271 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:06:50.096831 1273271 out.go:252]   - Generating certificates and keys ...
	I1002 21:06:50.097032 1273271 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:06:50.097163 1273271 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:06:50.319703 1273271 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:06:50.518433 1273271 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:06:51.225839 1273271 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:06:52.015224 1273271 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:06:52.191661 1273271 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:06:52.191853 1273271 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-806706 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:06:52.478313 1273271 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:06:52.478485 1273271 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-806706 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:06:53.092568 1273271 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:06:53.748457 1273271 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:06:54.512186 1273271 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:06:54.512489 1273271 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:06:54.707117 1273271 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:06:55.257407 1273271 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:06:55.406325 1273271 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:06:55.758695 1273271 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:06:56.231496 1273271 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:06:56.232196 1273271 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:06:56.237323 1273271 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:06:56.241082 1273271 out.go:252]   - Booting up control plane ...
	I1002 21:06:56.241201 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:06:56.241283 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:06:56.241353 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:06:56.255803 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:06:56.255918 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:06:56.263894 1273271 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:06:56.264287 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:06:56.264520 1273271 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:06:56.397852 1273271 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:06:56.397985 1273271 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:06:57.899682 1273271 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501912544s
	I1002 21:06:57.905715 1273271 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:06:57.905819 1273271 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:06:57.905918 1273271 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:06:57.906004 1273271 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:07:00.877910 1273271 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.972603979s
	I1002 21:07:03.855188 1273271 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.950330222s
	I1002 21:07:03.909139 1273271 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001999277s
	I1002 21:07:03.932131 1273271 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:07:04.450147 1273271 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:07:04.469424 1273271 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:07:04.469656 1273271 kubeadm.go:318] [mark-control-plane] Marking the node addons-806706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:07:04.481845 1273271 kubeadm.go:318] [bootstrap-token] Using token: j9kk4w.ed5b2m1m2jv5yn6m
	I1002 21:07:04.486881 1273271 out.go:252]   - Configuring RBAC rules ...
	I1002 21:07:04.487029 1273271 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:07:04.490598 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:07:04.498400 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:07:04.505060 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:07:04.509252 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:07:04.513648 1273271 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:07:04.645363 1273271 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:07:05.081602 1273271 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:07:05.646339 1273271 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:07:05.647513 1273271 kubeadm.go:318] 
	I1002 21:07:05.647588 1273271 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:07:05.647602 1273271 kubeadm.go:318] 
	I1002 21:07:05.647685 1273271 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:07:05.647695 1273271 kubeadm.go:318] 
	I1002 21:07:05.647723 1273271 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:07:05.647790 1273271 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:07:05.647847 1273271 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:07:05.647857 1273271 kubeadm.go:318] 
	I1002 21:07:05.647922 1273271 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:07:05.647932 1273271 kubeadm.go:318] 
	I1002 21:07:05.647983 1273271 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:07:05.647991 1273271 kubeadm.go:318] 
	I1002 21:07:05.648052 1273271 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:07:05.648138 1273271 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:07:05.648215 1273271 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:07:05.648224 1273271 kubeadm.go:318] 
	I1002 21:07:05.648314 1273271 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:07:05.648399 1273271 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:07:05.648407 1273271 kubeadm.go:318] 
	I1002 21:07:05.648496 1273271 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j9kk4w.ed5b2m1m2jv5yn6m \
	I1002 21:07:05.648608 1273271 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 21:07:05.648634 1273271 kubeadm.go:318] 	--control-plane 
	I1002 21:07:05.648643 1273271 kubeadm.go:318] 
	I1002 21:07:05.648732 1273271 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:07:05.648740 1273271 kubeadm.go:318] 
	I1002 21:07:05.648832 1273271 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j9kk4w.ed5b2m1m2jv5yn6m \
	I1002 21:07:05.648948 1273271 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 21:07:05.651796 1273271 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:07:05.652036 1273271 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:07:05.652151 1273271 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:07:05.652176 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:07:05.652189 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:07:05.655369 1273271 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:07:05.658320 1273271 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:07:05.662455 1273271 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:07:05.662477 1273271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:07:05.675153 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
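The CNI step above stages a manifest at /var/tmp/minikube/cni.yaml (the log shows a 2601-byte kindnet manifest, chosen because the docker driver is paired with the crio runtime) and applies it with the cluster-local kubectl. A minimal sketch under those assumptions; the manifest bytes here are a placeholder, not the real kindnet YAML:

// cni_apply.go — sketch of the CNI apply step above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# kindnet manifest would go here\n") // placeholder
	const path = "/var/tmp/minikube/cni.yaml"

	// Stage the manifest on disk, then apply it with the versioned kubectl
	// and the node-local kubeconfig, as the log does.
	if err := os.WriteFile(path, manifest, 0644); err != nil {
		panic(err)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}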
	I1002 21:07:05.965382 1273271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:07:05.965537 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:05.965618 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-806706 minikube.k8s.io/updated_at=2025_10_02T21_07_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-806706 minikube.k8s.io/primary=true
	I1002 21:07:06.125578 1273271 ops.go:34] apiserver oom_adj: -16
	I1002 21:07:06.125638 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:06.625925 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:07.126273 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:07.625763 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:08.126187 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:08.626001 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.126212 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.626382 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.724208 1273271 kubeadm.go:1113] duration metric: took 3.758721503s to wait for elevateKubeSystemPrivileges
	I1002 21:07:09.724240 1273271 kubeadm.go:402] duration metric: took 19.898272937s to StartCluster
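The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until the default service account exists before finishing RBAC setup, which here took 3.76s of the 19.9s StartCluster. A minimal sketch of that polling loop; the timeout value is an assumption:

// wait_sa.go — sketch of the elevateKubeSystemPrivileges wait loop above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the ~500ms cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil // the default SA exists; RBAC work can continue
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	start := time.Now()
	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}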
	I1002 21:07:09.724257 1273271 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:07:09.725062 1273271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:07:09.725721 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:07:09.728846 1273271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:07:09.729348 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:07:09.729753 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:09.729718 1273271 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 21:07:09.729947 1273271 addons.go:69] Setting yakd=true in profile "addons-806706"
	I1002 21:07:09.729983 1273271 addons.go:238] Setting addon yakd=true in "addons-806706"
	I1002 21:07:09.729986 1273271 addons.go:69] Setting inspektor-gadget=true in profile "addons-806706"
	I1002 21:07:09.730061 1273271 addons.go:69] Setting registry=true in profile "addons-806706"
	I1002 21:07:09.730078 1273271 addons.go:238] Setting addon registry=true in "addons-806706"
	I1002 21:07:09.730091 1273271 addons.go:238] Setting addon inspektor-gadget=true in "addons-806706"
	I1002 21:07:09.730110 1273271 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-806706"
	I1002 21:07:09.730133 1273271 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-806706"
	I1002 21:07:09.730176 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730415 1273271 addons.go:69] Setting volcano=true in profile "addons-806706"
	I1002 21:07:09.730446 1273271 addons.go:238] Setting addon volcano=true in "addons-806706"
	I1002 21:07:09.730470 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730865 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730911 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.735684 1273271 addons.go:69] Setting volumesnapshots=true in profile "addons-806706"
	I1002 21:07:09.735774 1273271 addons.go:238] Setting addon volumesnapshots=true in "addons-806706"
	I1002 21:07:09.735823 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.736377 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.742563 1273271 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-806706"
	I1002 21:07:09.742604 1273271 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-806706"
	I1002 21:07:09.742641 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.743142 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.753365 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.757614 1273271 addons.go:69] Setting cloud-spanner=true in profile "addons-806706"
	I1002 21:07:09.757697 1273271 addons.go:238] Setting addon cloud-spanner=true in "addons-806706"
	I1002 21:07:09.757764 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.758369 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.766491 1273271 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-806706"
	I1002 21:07:09.766563 1273271 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-806706"
	I1002 21:07:09.766595 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.767084 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.768348 1273271 out.go:179] * Verifying Kubernetes components...
	I1002 21:07:09.782435 1273271 addons.go:69] Setting default-storageclass=true in profile "addons-806706"
	I1002 21:07:09.782468 1273271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-806706"
	I1002 21:07:09.782827 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.790332 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:07:09.801005 1273271 addons.go:69] Setting gcp-auth=true in profile "addons-806706"
	I1002 21:07:09.801051 1273271 mustload.go:65] Loading cluster: addons-806706
	I1002 21:07:09.801260 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:09.801532 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.830728 1273271 addons.go:69] Setting ingress=true in profile "addons-806706"
	I1002 21:07:09.830828 1273271 addons.go:238] Setting addon ingress=true in "addons-806706"
	I1002 21:07:09.830909 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.831543 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	W1002 21:07:09.831924 1273271 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 21:07:09.837925 1273271 addons.go:69] Setting ingress-dns=true in profile "addons-806706"
	I1002 21:07:09.838005 1273271 addons.go:238] Setting addon ingress-dns=true in "addons-806706"
	I1002 21:07:09.838098 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.838623 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730093 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.853915 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730020 1273271 addons.go:69] Setting metrics-server=true in profile "addons-806706"
	I1002 21:07:09.878918 1273271 addons.go:238] Setting addon metrics-server=true in "addons-806706"
	I1002 21:07:09.878990 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730054 1273271 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-806706"
	I1002 21:07:09.879332 1273271 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-806706"
	I1002 21:07:09.879370 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730099 1273271 addons.go:69] Setting registry-creds=true in profile "addons-806706"
	I1002 21:07:09.886090 1273271 addons.go:238] Setting addon registry-creds=true in "addons-806706"
	I1002 21:07:09.886159 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730104 1273271 addons.go:69] Setting storage-provisioner=true in profile "addons-806706"
	I1002 21:07:09.895669 1273271 addons.go:238] Setting addon storage-provisioner=true in "addons-806706"
	I1002 21:07:09.890809 1273271 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 21:07:09.891170 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.891362 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.891440 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730013 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.895814 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.896629 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.929876 1273271 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 21:07:09.929953 1273271 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 21:07:09.930103 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:09.952850 1273271 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 21:07:09.959928 1273271 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 21:07:09.959953 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 21:07:09.960029 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:09.977362 1273271 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-806706"
	I1002 21:07:09.977407 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.977821 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.994762 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 21:07:10.000326 1273271 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 21:07:10.006167 1273271 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 21:07:10.006199 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 21:07:10.006288 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.000330 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 21:07:10.038953 1273271 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 21:07:10.039033 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.049522 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:10.052862 1273271 addons.go:238] Setting addon default-storageclass=true in "addons-806706"
	I1002 21:07:10.052905 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:10.053334 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:10.058910 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:10.066974 1273271 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 21:07:10.067158 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 21:07:10.073589 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 21:07:10.096435 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 21:07:10.100303 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 21:07:10.108639 1273271 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 21:07:10.109051 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 21:07:10.109069 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 21:07:10.109146 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.128325 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:10.132315 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 21:07:10.137801 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 21:07:10.138134 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:10.148414 1273271 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:07:10.148446 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 21:07:10.148518 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.148964 1273271 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:07:10.149022 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 21:07:10.149113 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.172501 1273271 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 21:07:10.177719 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:07:10.178220 1273271 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 21:07:10.178239 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 21:07:10.178302 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.186261 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 21:07:10.186756 1273271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:07:10.186808 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:07:10.186919 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.202359 1273271 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 21:07:10.205551 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 21:07:10.205868 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 21:07:10.205884 1273271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 21:07:10.205965 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.212430 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 21:07:10.220721 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 21:07:10.224508 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 21:07:10.224544 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 21:07:10.224634 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.241296 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:07:10.265326 1273271 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 21:07:10.277651 1273271 out.go:179]   - Using image docker.io/busybox:stable
	I1002 21:07:10.282324 1273271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:07:10.282348 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 21:07:10.282507 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.284492 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.291660 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.293367 1273271 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 21:07:10.296340 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 21:07:10.296362 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 21:07:10.296428 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.326875 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.330663 1273271 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 21:07:10.333491 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 21:07:10.333519 1273271 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 21:07:10.333609 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.337041 1273271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:07:10.346084 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.380678 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.407492 1273271 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:07:10.407513 1273271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:07:10.407577 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.410225 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.420135 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.431968 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.439046 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.476759 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.506669 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	W1002 21:07:10.509464 1273271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 21:07:10.509513 1273271 retry.go:31] will retry after 190.507692ms: ssh: handshake failed: EOF
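The warning/retry pair above shows sshutil's behavior when many concurrent SSH dials race against the node's sshd: a handshake EOF is not fatal, it is retried after a short randomized delay (190ms here). A minimal sketch of that pattern over plain TCP; the jitter range and attempt cap are assumptions, only the retry-on-failure shape is from the log:

// retry_dial.go — sketch of the dial-retry behavior above.
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry retries a dial with a small randomized backoff, the way
// minikube retries a failed SSH handshake (sshutil.go:64 / retry.go:31).
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		delay := time.Duration(100+rand.Intn(200)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %s): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:34271", 5)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}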
	I1002 21:07:10.520059 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.520832 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.528307 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.532662 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.999392 1273271 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:10.999468 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 21:07:11.008772 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 21:07:11.008846 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 21:07:11.129987 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 21:07:11.210546 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:07:11.222067 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 21:07:11.222093 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 21:07:11.243106 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:11.246584 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 21:07:11.304723 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 21:07:11.307179 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 21:07:11.322550 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 21:07:11.322583 1273271 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 21:07:11.336867 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 21:07:11.336892 1273271 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 21:07:11.367757 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:07:11.398189 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:07:11.442150 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:07:11.444913 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 21:07:11.444937 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 21:07:11.464326 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 21:07:11.464350 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 21:07:11.475588 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 21:07:11.475615 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 21:07:11.479859 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:07:11.479890 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 21:07:11.484304 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:07:11.528238 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 21:07:11.528265 1273271 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 21:07:11.622120 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 21:07:11.622144 1273271 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 21:07:11.651926 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 21:07:11.651965 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 21:07:11.671553 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:07:11.680047 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 21:07:11.680072 1273271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 21:07:11.776615 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 21:07:11.776641 1273271 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 21:07:11.788447 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 21:07:11.788473 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 21:07:11.877313 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:07:11.877357 1273271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 21:07:11.901739 1273271 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:11.901778 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 21:07:11.965107 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 21:07:11.965130 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 21:07:11.967119 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 21:07:11.967158 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 21:07:12.048810 1273271 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.711716204s)
	I1002 21:07:12.049716 1273271 node_ready.go:35] waiting up to 6m0s for node "addons-806706" to be "Ready" ...
	I1002 21:07:12.050249 1273271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.808908513s)
	I1002 21:07:12.050272 1273271 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
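The step above shells out to kubectl and sed to splice a hosts{} block into the coredns Corefile so that host.minikube.internal resolves to the gateway IP (192.168.49.1). A minimal client-go sketch of the same edit follows; the marker matching and indentation are simplified assumptions, and minikube itself performs this via the bash pipeline quoted in the log, not this code.

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the log's kubectl invocations use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Splice a hosts{} block in front of the forward directive, as the sed
	// expression does; real Corefile indentation is elided in this sketch.
	hosts := "hosts {\n   192.168.49.1 host.minikube.internal\n   fallthrough\n}\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", hosts+"forward .", 1)

	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}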
	I1002 21:07:12.062741 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:07:12.095087 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:12.154680 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 21:07:12.207863 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 21:07:12.207938 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 21:07:12.300769 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.170746365s)
	I1002 21:07:12.494517 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 21:07:12.494593 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 21:07:12.561962 1273271 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-806706" context rescaled to 1 replicas
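The kapi.go:214 line above scales the coredns deployment in kube-system down to one replica. A sketch of that intent with client-go; the function shape and error handling are assumptions, only the rescale itself comes from the log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS pins the coredns deployment to a single replica.
func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface) error {
	dep, err := client.AppsV1().Deployments("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	one := int32(1)
	dep.Spec.Replicas = &one // log: "rescaled to 1 replicas"
	_, err = client.AppsV1().Deployments("kube-system").Update(ctx, dep, metav1.UpdateOptions{})
	return err
}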
	I1002 21:07:12.796435 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 21:07:12.796511 1273271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 21:07:12.921352 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 21:07:12.921425 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 21:07:13.033734 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 21:07:13.033799 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 21:07:13.218598 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 21:07:13.218670 1273271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 21:07:13.478718 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 21:07:14.082903 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
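node_ready.go polls the node until its Ready condition turns True, emitting a W-line and retrying while it reports False. A sketch of such a wait loop, reconstructed from the log messages; the 2s poll interval is an assumption, while the 6m budget appears in the "waiting up to 6m0s" line at 21:07:12 above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node's Ready condition until it is True
// or the timeout elapses.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // "will retry", per the W-lines above
	}
	return fmt.Errorf("node %q not Ready after %v", name, timeout)
}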
	I1002 21:07:15.922075 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.711493081s)
	I1002 21:07:15.922150 1273271 addons.go:479] Verifying addon ingress=true in "addons-806706"
	I1002 21:07:15.922177 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.679041235s)
	W1002 21:07:15.922218 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:15.922239 1273271 retry.go:31] will retry after 263.658332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
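Both the warning and the retry line quote the same kubectl failure: /etc/kubernetes/addons/ig-crd.yaml is rejected because its top level lacks apiVersion and kind, so validation fails identically on every attempt until the manifest itself is fixed. What changes between attempts is only the retry.go delay, which grows (263ms here, later 370ms, 650ms, up to 5.19s). A sketch of that retry shape, with a doubling base and jitter as illustrative assumptions rather than minikube's exact backoff policy:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryApply re-runs fn with a growing, jittered delay between attempts,
// echoing the retry.go:31 "will retry after ..." lines above.
func retryApply(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

A deterministic failure like the missing apiVersion/kind above never clears on its own, so a loop like this simply burns its attempts; backoff only helps the transient cases, such as the CRD-ordering error further down.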
	I1002 21:07:15.922282 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.675676205s)
	I1002 21:07:15.922396 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.554616841s)
	I1002 21:07:15.922612 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.524390329s)
	I1002 21:07:15.922661 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.480483305s)
	I1002 21:07:15.922807 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.438472211s)
	I1002 21:07:15.922852 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.251274314s)
	I1002 21:07:15.922867 1273271 addons.go:479] Verifying addon registry=true in "addons-806706"
	I1002 21:07:15.922325 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.617580567s)
	I1002 21:07:15.922351 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.615151601s)
	I1002 21:07:15.923288 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.860513825s)
	I1002 21:07:15.923314 1273271 addons.go:479] Verifying addon metrics-server=true in "addons-806706"
	I1002 21:07:15.923401 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.82822872s)
	W1002 21:07:15.923422 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 21:07:15.923434 1273271 retry.go:31] will retry after 127.069562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
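This second failure is an ordering problem rather than a bad manifest: csi-hostpath-snapshotclass.yaml references kind VolumeSnapshotClass in the same apply that creates its CRD, and the CRD is not yet established when the CR is mapped. One common way to avoid it is to apply and await the CRDs before the dependent resources, as sketched below with the file paths from the log; minikube itself just re-applies with --force until the CRDs are ready (see the Completed line at 21:07:18.944842 further down).

package main

import (
	"fmt"
	"os/exec"
)

// applyInOrder applies the snapshot CRDs, waits for them to be established,
// then applies the resources that reference them.
func applyInOrder() error {
	steps := [][]string{
		{"kubectl", "apply",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml"},
		{"kubectl", "wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
		{"kubectl", "apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
			"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

kubectl wait --for=condition=established blocks until the API server can serve the new kind, which is exactly the window the "ensure CRDs are installed first" message flags.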
	I1002 21:07:15.923477 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.76872653s)
	I1002 21:07:15.925483 1273271 out.go:179] * Verifying registry addon...
	I1002 21:07:15.925523 1273271 out.go:179] * Verifying ingress addon...
	I1002 21:07:15.929452 1273271 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-806706 service yakd-dashboard -n yakd-dashboard
	
	I1002 21:07:15.930208 1273271 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 21:07:15.931007 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 21:07:15.937619 1273271 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 21:07:15.937644 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:15.939294 1273271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:07:15.939317 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 21:07:15.940231 1273271 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
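The default-storageclass warning above is an optimistic-concurrency conflict: the StorageClass was modified between minikube's read and its write, so the update carried a stale resourceVersion and was refused. The standard client-go remedy is to re-read and re-apply inside retry.RetryOnConflict; the sketch below shows that pattern, not minikube's actual callback.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading the object on each attempt so a concurrent writer cannot
// leave us holding a stale resourceVersion.
func markNonDefault(ctx context.Context, client kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}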
	I1002 21:07:16.050735 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:16.186575 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:16.448179 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:16.450635 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.971808234s)
	I1002 21:07:16.450667 1273271 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-806706"
	I1002 21:07:16.453825 1273271 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 21:07:16.457164 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:16.457771 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 21:07:16.469441 1273271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:07:16.469466 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
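Each kapi.go:96 line above and below is one iteration of a label-selector poll: list the matching pods, check their phase, sleep, repeat until one is Running or the timeout expires. A reconstruction of that loop, with the interval and structure inferred from the roughly half-second spacing of the log lines rather than from minikube's source:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodRunning lists pods by label selector in ns and polls until one
// reports phase Running, or gives up at the deadline.
func waitPodRunning(ctx context.Context, client kubernetes.Interface, ns, selector string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // log lines arrive ~0.5s apart
	}
	return false
}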
	W1002 21:07:16.554220 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:16.936067 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:16.938482 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:16.962498 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.434964 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:17.435219 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:17.461022 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.810241 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 21:07:17.810346 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:17.837053 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:17.936838 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:17.937406 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:17.946685 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 21:07:17.962139 1273271 addons.go:238] Setting addon gcp-auth=true in "addons-806706"
	I1002 21:07:17.962183 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:17.962619 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:17.963390 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.980402 1273271 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 21:07:17.980464 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:17.998361 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:18.435543 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:18.435789 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:18.460790 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:18.935028 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:18.935437 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:18.944842 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89404948s)
	I1002 21:07:18.944919 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.75826717s)
	W1002 21:07:18.944951 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:18.944973 1273271 retry.go:31] will retry after 370.220878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:18.947973 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:18.950838 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 21:07:18.953646 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 21:07:18.953677 1273271 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 21:07:18.961729 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:18.972133 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 21:07:18.972214 1273271 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 21:07:18.986019 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 21:07:18.986106 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 21:07:19.000019 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1002 21:07:19.053851 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:19.316408 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:19.439014 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:19.440050 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:19.467273 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:19.575428 1273271 addons.go:479] Verifying addon gcp-auth=true in "addons-806706"
	I1002 21:07:19.578706 1273271 out.go:179] * Verifying gcp-auth addon...
	I1002 21:07:19.582315 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 21:07:19.591969 1273271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 21:07:19.591994 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:19.936120 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:19.936284 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:19.962204 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:20.085479 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:20.248020 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:20.248054 1273271 retry.go:31] will retry after 650.71335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:20.434680 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:20.434815 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:20.461477 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:20.585509 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:20.899039 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:20.935834 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:20.936838 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:20.961719 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:21.086141 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:21.435251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:21.435519 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:21.461651 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:21.553831 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:21.585415 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:21.740320 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:21.740352 1273271 retry.go:31] will retry after 469.524684ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:21.933604 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:21.934867 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:21.962367 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:22.085631 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:22.210869 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:22.434760 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:22.435172 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:22.461548 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:22.586197 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:22.934857 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:22.935044 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:22.961640 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:23.034703 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:23.034733 1273271 retry.go:31] will retry after 1.23076577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:23.085810 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:23.434350 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:23.434498 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:23.461596 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:23.586100 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:23.934659 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:23.934862 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:23.960609 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:24.052529 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:24.085447 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:24.265692 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:24.435088 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:24.435698 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:24.460746 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:24.586314 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:24.934952 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:24.935782 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:24.961387 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:25.078552 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:25.078590 1273271 retry.go:31] will retry after 1.733039225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:25.085535 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:25.434540 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:25.434894 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:25.460603 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:25.585875 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:25.934311 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:25.934449 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:25.961470 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:26.053948 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:26.085707 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:26.434545 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:26.434825 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:26.461502 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:26.585081 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:26.812439 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:26.935927 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:26.936745 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:26.960762 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:27.086254 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:27.435069 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:27.435261 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:27.461472 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:27.585834 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:27.633678 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:27.633710 1273271 retry.go:31] will retry after 1.586831322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:27.934102 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:27.934441 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:27.961836 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:28.085882 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:28.434885 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:28.435371 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:28.461094 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:28.553100 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:28.586364 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:28.934535 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:28.934595 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:28.961385 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:29.085986 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:29.221142 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:29.434838 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:29.435783 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:29.461043 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:29.585659 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:29.935666 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:29.936087 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:29.961447 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:30.066747 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:30.067406 1273271 retry.go:31] will retry after 2.435069948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:30.085499 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:30.434575 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:30.435067 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:30.460755 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:30.585881 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:30.934108 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:30.934448 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:30.961530 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:31.053499 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:31.086308 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:31.433809 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:31.433990 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:31.460885 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:31.585621 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:31.933971 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:31.934624 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:31.962524 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:32.085264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:32.434192 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:32.435542 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:32.461864 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:32.503251 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:32.585938 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:32.934886 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:32.935193 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:32.961896 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:33.086024 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:33.305039 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:33.305070 1273271 retry.go:31] will retry after 5.195500776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
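
The two failed applies above share one root cause: client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in that file carries no apiVersion or kind (every resource from the companion ig-deployment.yaml applies cleanly, so only the CRD manifest is malformed). A minimal sketch of reproducing and working around this by hand, assuming access to the cluster's kubeconfig; both flags are stock kubectl options, and the second is the workaround the error message itself suggests:

	# Surface the same validation error without mutating the cluster:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# Skip client-side schema validation entirely (at the cost of the safety check):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
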
	I1002 21:07:33.434265 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:33.434611 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:33.461395 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:33.553353 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:33.585515 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:33.934849 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:33.934993 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:33.961894 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:34.086099 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:34.433500 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:34.434559 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:34.461550 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:34.585363 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:34.933926 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:34.934098 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:34.961422 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:35.085574 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:35.434827 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:35.434847 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:35.460814 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:35.553499 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:35.585252 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:35.933091 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:35.933908 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:35.960988 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:36.085731 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:36.434964 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:36.435162 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:36.461183 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:36.585752 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:36.933953 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:36.934338 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:36.961370 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:37.085253 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:37.433813 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:37.433955 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:37.460843 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:37.585411 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:37.934203 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:37.934378 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:37.961869 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:38.053164 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:38.086557 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:38.434398 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:38.434464 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:38.461004 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:38.501394 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:38.585801 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:38.958120 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:38.966795 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:38.967504 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:39.086556 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:39.407833 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:39.407913 1273271 retry.go:31] will retry after 5.402739697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:39.433546 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:39.434161 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:39.460919 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:39.585271 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:39.933637 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:39.933823 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:39.962100 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:40.086378 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:40.433508 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:40.434615 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:40.461833 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:40.552832 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:40.585570 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:40.934059 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:40.934452 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:40.962312 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:41.085766 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:41.433885 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:41.434331 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:41.461318 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:41.585393 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:41.933977 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:41.934139 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:41.961873 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:42.086566 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:42.433884 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:42.434129 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:42.471855 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:42.553265 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:42.586117 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:42.934285 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:42.934602 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:42.962489 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:43.086532 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:43.434217 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:43.434482 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:43.461617 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:43.585940 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:43.933610 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:43.933859 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:43.960669 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:44.086159 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:44.433938 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:44.434001 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:44.460763 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:44.585606 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:44.811674 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:44.935273 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:44.935853 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:44.962130 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:45.061322 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:45.086008 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:45.435675 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:45.436406 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:45.463877 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:45.585711 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:45.802355 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:45.802393 1273271 retry.go:31] will retry after 18.199074495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:45.933920 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:45.934097 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:45.962333 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:46.086418 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:46.434152 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:46.434214 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:46.461141 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:46.586091 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:46.932980 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:46.933545 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:46.961219 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:47.086002 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:47.433457 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:47.434754 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:47.461758 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:47.552580 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:47.585353 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:47.933972 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:47.933985 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:47.960745 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:48.085673 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:48.433787 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:48.435091 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:48.460641 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:48.585264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:48.933467 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:48.933910 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:48.961381 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:49.086122 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:49.433208 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:49.433467 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:49.461801 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:49.552910 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:49.585606 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:49.933930 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:49.934230 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:49.961782 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:50.086017 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:50.434475 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:50.434529 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:50.461706 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:50.585113 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:50.934806 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:50.935321 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:50.961760 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:51.086113 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:51.452027 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:51.453686 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:51.558120 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:51.594538 1273271 node_ready.go:49] node "addons-806706" is "Ready"
	I1002 21:07:51.594572 1273271 node_ready.go:38] duration metric: took 39.544819954s for node "addons-806706" to be "Ready" ...
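
The 39.5s wait above is minikube's node-readiness gate. An equivalent manual check, sketched under the assumption that kubectl's current context points at this cluster (the timeout value is arbitrary):

	# Block until the node reports the Ready condition, or give up after two minutes:
	kubectl wait --for=condition=Ready node/addons-806706 --timeout=120s
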
	I1002 21:07:51.594586 1273271 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:07:51.594654 1273271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:07:51.598271 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:51.617604 1273271 api_server.go:72] duration metric: took 41.888712651s to wait for apiserver process to appear ...
	I1002 21:07:51.617629 1273271 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:07:51.617649 1273271 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 21:07:51.639019 1273271 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 21:07:51.646107 1273271 api_server.go:141] control plane version: v1.34.1
	I1002 21:07:51.646139 1273271 api_server.go:131] duration metric: took 28.503568ms to wait for apiserver health ...
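
The healthz probe above is a plain HTTPS GET against the apiserver; the same endpoint can be queried through kubectl's raw API passthrough, a sketch assuming the current context targets this cluster:

	# Prints the literal string "ok" when the apiserver is healthy:
	kubectl get --raw /healthz
	# Or hit the endpoint directly, skipping TLS verification as the test harness effectively does:
	curl -k https://192.168.49.2:8443/healthz
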
	I1002 21:07:51.646148 1273271 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:07:51.664630 1273271 system_pods.go:59] 19 kube-system pods found
	I1002 21:07:51.664715 1273271 system_pods.go:61] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending
	I1002 21:07:51.664737 1273271 system_pods.go:61] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.664758 1273271 system_pods.go:61] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending
	I1002 21:07:51.664793 1273271 system_pods.go:61] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.664818 1273271 system_pods.go:61] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.664840 1273271 system_pods.go:61] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.664876 1273271 system_pods.go:61] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.664899 1273271 system_pods.go:61] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.664922 1273271 system_pods.go:61] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.664957 1273271 system_pods.go:61] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.664982 1273271 system_pods.go:61] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.665001 1273271 system_pods.go:61] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.665038 1273271 system_pods.go:61] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.665061 1273271 system_pods.go:61] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending
	I1002 21:07:51.665081 1273271 system_pods.go:61] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.665112 1273271 system_pods.go:61] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.665133 1273271 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.665151 1273271 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.665175 1273271 system_pods.go:61] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.665209 1273271 system_pods.go:74] duration metric: took 19.05453ms to wait for pod list to return data ...
	I1002 21:07:51.665231 1273271 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:07:51.673827 1273271 default_sa.go:45] found service account: "default"
	I1002 21:07:51.673918 1273271 default_sa.go:55] duration metric: took 8.665447ms for default service account to be created ...
	I1002 21:07:51.673944 1273271 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:07:51.679620 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:51.679734 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending
	I1002 21:07:51.679757 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.679776 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending
	I1002 21:07:51.679811 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.679836 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.679857 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.679895 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.679918 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.679940 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.679972 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.680000 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.680020 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.680053 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.680074 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending
	I1002 21:07:51.680092 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.680112 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.680144 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.680162 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.680184 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.680230 1273271 retry.go:31] will retry after 197.900028ms: missing components: kube-dns
	I1002 21:07:51.931125 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:51.931206 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:51.931230 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.931270 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:51.931292 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.931313 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.931331 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.931362 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.931386 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.931407 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.931442 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.931468 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.931486 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.931521 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.931545 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:51.931562 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.931597 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.931618 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.931636 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.931656 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.931906 1273271 retry.go:31] will retry after 257.404003ms: missing components: kube-dns
	I1002 21:07:51.944503 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:51.945234 1273271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:07:51.945251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:51.966275 1273271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:07:51.966299 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.091534 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:52.194857 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.194895 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.194905 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.194913 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.194920 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.194924 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.194930 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.194936 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.194944 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.194950 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.194959 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.194964 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.194972 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.194983 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.194994 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.195004 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.195008 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:52.195015 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.195022 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.195027 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:52.195045 1273271 retry.go:31] will retry after 398.554495ms: missing components: kube-dns
	I1002 21:07:52.436222 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:52.437591 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:52.553750 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.589857 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:52.602627 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.602672 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.602682 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.602690 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.602696 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.602701 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.602707 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.602715 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.602725 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.602746 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.602762 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.602767 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.602777 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.602788 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.602798 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.602804 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.602822 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:52.602832 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.602843 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.602848 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:52.602868 1273271 retry.go:31] will retry after 381.418125ms: missing components: kube-dns
	I1002 21:07:52.936588 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:52.937048 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:52.961695 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.991005 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.991048 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.991066 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.991074 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.991086 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.991097 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.991103 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.991107 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.991124 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.991139 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.991143 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.991153 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.991159 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.991166 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.991180 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.991201 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.991213 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:52.991219 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.991225 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.991234 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:52.991250 1273271 retry.go:31] will retry after 659.745459ms: missing components: kube-dns
	I1002 21:07:53.087137 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:53.436921 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:53.437809 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:53.461845 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:53.586454 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:53.677078 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:53.677116 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Running
	I1002 21:07:53.677127 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:53.677135 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:53.677145 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:53.677149 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:53.677154 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:53.677159 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:53.677168 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:53.677176 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:53.677185 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:53.677189 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:53.677195 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:53.677202 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:53.677210 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:53.677217 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:53.677229 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:53.677236 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:53.677242 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:53.677250 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:53.677260 1273271 system_pods.go:126] duration metric: took 2.003296946s to wait for k8s-apps to be running ...
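
The Pending / Ready:ContainersNotReady annotations above mirror each pod's status conditions. An illustrative way to inspect those conditions for one of the listed pods (assuming the addon pods live in kube-system, as is typical for minikube) would be:

	kubectl -n kube-system get pod csi-hostpath-attacher-0 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
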
	I1002 21:07:53.677272 1273271 system_svc.go:44] waiting for kubelet service to be running ...
	I1002 21:07:53.677374 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:07:53.695228 1273271 system_svc.go:56] duration metric: took 17.945251ms for WaitForService to wait for kubelet
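
The kubelet probe above hinges entirely on the exit status of systemctl. A minimal hand-run sketch of the same check (the trailing echo is illustrative only):

	sudo systemctl is-active --quiet kubelet && echo kubelet is running

With --quiet the state name is suppressed and only the exit status matters: 0 means active, anything else means it is not.
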
	I1002 21:07:53.695270 1273271 kubeadm.go:586] duration metric: took 43.96638259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:07:53.695291 1273271 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:07:53.698813 1273271 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:07:53.698851 1273271 node_conditions.go:123] node cpu capacity is 2
	I1002 21:07:53.698864 1273271 node_conditions.go:105] duration metric: took 3.567692ms to run NodePressure ...
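
The capacity figures in the NodePressure check are read straight from the node's status. An illustrative command for fetching the same fields by hand (not part of the test run):

	kubectl get node addons-806706 -o jsonpath='{.status.capacity}'

which should report, roughly, cpu:2 and ephemeral-storage:203034800Ki alongside the node's memory and pod capacity.
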
	I1002 21:07:53.698879 1273271 start.go:241] waiting for startup goroutines ...
	I1002 21:07:53.935622 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:53.936408 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:53.962423 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:54.089341 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling lines as above, repeated every ~500ms from 21:07:54 through 21:08:03; registry, ingress-nginx, csi-hostpath-driver and gcp-auth all remained Pending: [<nil>] ...]
	I1002 21:08:04.002486 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:08:04.085422 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:04.435503 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:04.435937 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:04.461215 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:04.585838 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:04.935214 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:04.935251 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:04.961296 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:05.087612 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:05.232323 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.229794664s)
	W1002 21:08:05.232360 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:08:05.232392 1273271 retry.go:31] will retry after 21.927356605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
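
kubectl rejects the manifest because every Kubernetes object must declare apiVersion and kind at the top level, and ig-crd.yaml apparently carries neither. For comparison, a minimal well-formed CRD header looks like the sketch below (the group and name are illustrative placeholders, not the actual Inspektor Gadget CRD):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io
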
	[... kapi.go:96 polling lines repeated every ~500ms from 21:08:05 through 21:08:27; all four pods remained Pending: [<nil>] ...]
	I1002 21:08:27.160867 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:08:27.435115 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:27.435287 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:27.461345 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:27.585276 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:27.939012 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:27.939195 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.039185 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:28.086480 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:28.433271 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:28.435201 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.437614 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.276713519s)
	W1002 21:08:28.437685 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:08:28.437718 1273271 retry.go:31] will retry after 42.484158576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
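
The workaround stderr suggests would silence the error rather than fix it: --validate=false skips kubectl's client-side schema validation and does nothing to supply the missing apiVersion and kind fields. The invocation the message proposes would be:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

Note the retry cadence as well: 21.9s after the first failure, 42.5s after the second, consistent with randomized exponential backoff in retry.go.
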
	I1002 21:08:28.461235 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:28.586057 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:28.935061 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.935160 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:28.961437 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:29.086099 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:29.437724 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:29.437980 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:29.460922 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:29.585729 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:29.945792 1273271 kapi.go:107] duration metric: took 1m14.014779115s to wait for kubernetes.io/minikube-addons=registry ...
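
The registry wait finally clears after 1m14s; the kapi.go:96 loop is polling pods by label selector until they report Ready. A rough hand-run equivalent (namespace and timeout are assumptions for illustration):

	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
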
	[... kapi.go:96 polling lines repeated every ~500ms from 21:08:29 onward; ingress-nginx, csi-hostpath-driver and gcp-auth remained Pending: [<nil>] through at least 21:08:48 ...]
	I1002 21:08:48.086179 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:48.433386 1273271 kapi.go:107] duration metric: took 1m32.503175872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 21:08:48.461565 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:48.586166 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:48.961827 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:49.090305 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:49.461980 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:49.585792 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:49.967892 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:50.086235 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:50.462091 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:50.586328 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:50.965666 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:51.091329 1273271 kapi.go:107] duration metric: took 1m31.509013187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 21:08:51.094311 1273271 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-806706 cluster.
	I1002 21:08:51.097720 1273271 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 21:08:51.103013 1273271 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
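	
	The two notes above refer to the gcp-auth webhook's opt-out label. A minimal sketch of a pod that skips credential mounting, assuming the webhook matches on the `gcp-auth-skip-secret` label as the message describes (pod and container names here are illustrative, not from this run):
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                 # illustrative name
	    labels:
	      gcp-auth-skip-secret: "true"     # opts this pod out of credential mounting
	  spec:
	    containers:
	    - name: app
	      image: busybox
	      command: ["sleep", "infinity"]
	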
	I1002 21:08:51.461617 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:51.962923 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:52.465117 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:52.961449 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:53.462055 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:53.961600 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:54.463986 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:54.976419 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:55.463354 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:55.963530 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:56.461539 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:56.961216 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:57.464496 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:57.971108 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:58.461875 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:58.962795 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:59.463317 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:59.977176 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:00.463115 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:00.961672 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:01.461626 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:01.961410 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:02.462305 1273271 kapi.go:107] duration metric: took 1m46.004529999s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 21:09:10.922166 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 21:09:11.735248 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:09:11.735348 1273271 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
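	
	The validation failure above means at least one document in ig-crd.yaml was applied without the mandatory `apiVersion` and `kind` fields, which kubectl's client-side validation rejects. For reference, a CRD manifest header that passes that validation looks roughly like the following sketch (the group, kind, and name values are illustrative, not read from ig-crd.yaml):
	
	  apiVersion: apiextensions.k8s.io/v1
	  kind: CustomResourceDefinition
	  metadata:
	    name: traces.gadget.kinvolk.io     # illustrative
	  spec:
	    group: gadget.kinvolk.io
	    names:
	      kind: Trace
	      plural: traces
	    scope: Namespaced
	    versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	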
	I1002 21:09:11.738376 1273271 out.go:179] * Enabled addons: registry-creds, cloud-spanner, ingress-dns, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 21:09:11.741160 1273271 addons.go:514] duration metric: took 2m2.011439269s for enable addons: enabled=[registry-creds cloud-spanner ingress-dns storage-provisioner amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 21:09:11.741207 1273271 start.go:246] waiting for cluster config update ...
	I1002 21:09:11.741227 1273271 start.go:255] writing updated cluster config ...
	I1002 21:09:11.741518 1273271 ssh_runner.go:195] Run: rm -f paused
	I1002 21:09:11.745229 1273271 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:09:11.748918 1273271 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.755464 1273271 pod_ready.go:94] pod "coredns-66bc5c9577-pr27b" is "Ready"
	I1002 21:09:11.755488 1273271 pod_ready.go:86] duration metric: took 6.542479ms for pod "coredns-66bc5c9577-pr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.757578 1273271 pod_ready.go:83] waiting for pod "etcd-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.762015 1273271 pod_ready.go:94] pod "etcd-addons-806706" is "Ready"
	I1002 21:09:11.762069 1273271 pod_ready.go:86] duration metric: took 4.471077ms for pod "etcd-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.764260 1273271 pod_ready.go:83] waiting for pod "kube-apiserver-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.768883 1273271 pod_ready.go:94] pod "kube-apiserver-addons-806706" is "Ready"
	I1002 21:09:11.768907 1273271 pod_ready.go:86] duration metric: took 4.616903ms for pod "kube-apiserver-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.771389 1273271 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.149512 1273271 pod_ready.go:94] pod "kube-controller-manager-addons-806706" is "Ready"
	I1002 21:09:12.149539 1273271 pod_ready.go:86] duration metric: took 378.124844ms for pod "kube-controller-manager-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.349553 1273271 pod_ready.go:83] waiting for pod "kube-proxy-8gptp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.749548 1273271 pod_ready.go:94] pod "kube-proxy-8gptp" is "Ready"
	I1002 21:09:12.749628 1273271 pod_ready.go:86] duration metric: took 400.046684ms for pod "kube-proxy-8gptp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.949745 1273271 pod_ready.go:83] waiting for pod "kube-scheduler-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:13.349613 1273271 pod_ready.go:94] pod "kube-scheduler-addons-806706" is "Ready"
	I1002 21:09:13.349643 1273271 pod_ready.go:86] duration metric: took 399.871935ms for pod "kube-scheduler-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:13.349658 1273271 pod_ready.go:40] duration metric: took 1.604394828s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:09:13.409644 1273271 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:09:13.412871 1273271 out.go:179] * Done! kubectl is now configured to use "addons-806706" cluster and "default" namespace by default
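	
	The "minor skew: 1" reported two lines up is within kubectl's supported window: the client may be one minor version ahead of or behind the API server. Both sides of the skew can be confirmed with (output flag as in current kubectl releases):
	
	  kubectl version --output=yaml    # prints clientVersion (1.33.2) and serverVersion (1.34.1)
	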
	
	
	==> CRI-O <==
	Oct 02 21:12:17 addons-806706 crio[832]: time="2025-10-02T21:12:17.840810419Z" level=info msg="Removed container 18c8d60662b17980eb6bd6b9e27d52afa91a52865af305afa76e0fe60a6a3796: kube-system/registry-creds-764b6fb674-v22d7/registry-creds" id=8c1f2a40-664d-492b-8e44-45ef35c947b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.276788554Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-58vbm/POD" id=dd8e5356-4073-4317-bcfb-6b2a4f006ec9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.276860421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.296020988Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-58vbm Namespace:default ID:7b6ea5d8c34ef7a71a2ac805f3cc5b84b02d68bfee562ac10fffd73289c01113 UID:4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc NetNS:/var/run/netns/c3623d83-60c2-43c5-8feb-802d32a85363 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400007b480}] Aliases:map[]}"
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.296061791Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-58vbm to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.318884844Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-58vbm Namespace:default ID:7b6ea5d8c34ef7a71a2ac805f3cc5b84b02d68bfee562ac10fffd73289c01113 UID:4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc NetNS:/var/run/netns/c3623d83-60c2-43c5-8feb-802d32a85363 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400007b480}] Aliases:map[]}"
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.319178844Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-58vbm for CNI network kindnet (type=ptp)"
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.328595196Z" level=info msg="Ran pod sandbox 7b6ea5d8c34ef7a71a2ac805f3cc5b84b02d68bfee562ac10fffd73289c01113 with infra container: default/hello-world-app-5d498dc89-58vbm/POD" id=dd8e5356-4073-4317-bcfb-6b2a4f006ec9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.33169446Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0e8f56eb-4f6d-4216-8c62-3477fc179eb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.332064053Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=0e8f56eb-4f6d-4216-8c62-3477fc179eb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.332199205Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=0e8f56eb-4f6d-4216-8c62-3477fc179eb2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.333666845Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b388a497-86ee-4cb3-a84d-4b5ae8480619 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.336111466Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.992684717Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=b388a497-86ee-4cb3-a84d-4b5ae8480619 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:12:33 addons-806706 crio[832]: time="2025-10-02T21:12:33.996004333Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3529ff0f-5698-4c09-a564-d81cd23d28af name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.006527396Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f39ca035-e1f3-4e99-9795-679198d6e34d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.02864903Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-58vbm/hello-world-app" id=2b7c025d-82a7-4081-977b-4eccdc8ceec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.029709794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.051195214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.051555659Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4644a3db3e41e701472d4e9bc03a00c5b6f0b5677da28dd147ed7864e57d5021/merged/etc/passwd: no such file or directory"
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.051658844Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4644a3db3e41e701472d4e9bc03a00c5b6f0b5677da28dd147ed7864e57d5021/merged/etc/group: no such file or directory"
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.052017032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.087072391Z" level=info msg="Created container a2a95049099bda028b04c97a268ab4523944995d444e3c720350ba9de45fe430: default/hello-world-app-5d498dc89-58vbm/hello-world-app" id=2b7c025d-82a7-4081-977b-4eccdc8ceec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.0904713Z" level=info msg="Starting container: a2a95049099bda028b04c97a268ab4523944995d444e3c720350ba9de45fe430" id=19e48c3f-8b2a-4915-8c6a-e0ca357e4990 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:12:34 addons-806706 crio[832]: time="2025-10-02T21:12:34.094775324Z" level=info msg="Started container" PID=7150 containerID=a2a95049099bda028b04c97a268ab4523944995d444e3c720350ba9de45fe430 description=default/hello-world-app-5d498dc89-58vbm/hello-world-app id=19e48c3f-8b2a-4915-8c6a-e0ca357e4990 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7b6ea5d8c34ef7a71a2ac805f3cc5b84b02d68bfee562ac10fffd73289c01113
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	a2a95049099bd       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   7b6ea5d8c34ef       hello-world-app-5d498dc89-58vbm            default
	5a6cf79da3ce5       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             17 seconds ago           Exited              registry-creds                           2                   6597cb70708b3       registry-creds-764b6fb674-v22d7            kube-system
	7a62396ff27f2       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac                                              2 minutes ago            Running             nginx                                    0                   69f11461620ef       nginx                                      default
	d35cf495129f3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   e988ad414eba0       busybox                                    default
	c49402c09d33e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	c8b471a8c4840       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	eac22bc4229e1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	2c7540d82769e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	3da90336e0b30       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	633694731e206       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            3 minutes ago            Running             gadget                                   0                   b349b665e4124       gadget-jmfns                               gadget
	95f9be3e8c94e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   3f6f92d72955c       gcp-auth-78565c9fb4-x9mnx                  gcp-auth
	f4df7e24e8366       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago            Running             controller                               0                   70e6b7b8023c6       ingress-nginx-controller-9cc49f96f-h8sxf   ingress-nginx
	3ebf774e4e0f2       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   5f2ff5b3c7b54       csi-hostpath-attacher-0                    kube-system
	7f5b74ab76d02       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   c503a0059a533       snapshot-controller-7d9fbc56b8-ms2zp       kube-system
	6d53e2adbcf08       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   3cf44a22a134c       yakd-dashboard-5ff678cb9-ldmgf             yakd-dashboard
	05feced20bbf8       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               4 minutes ago            Running             cloud-spanner-emulator                   0                   1f09d4910b7ff       cloud-spanner-emulator-85f6b7fc65-l4pxd    default
	54466abff97b0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   2f0e21390145b       local-path-provisioner-648f6765c9-9pqrj    local-path-storage
	a9334e6a8404e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              4 minutes ago            Running             registry-proxy                           0                   a038d6601e093       registry-proxy-z5g9b                       kube-system
	c8c35f4ab1b00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago            Exited              patch                                    0                   da5e43e682b25       ingress-nginx-admission-patch-4cbvw        ingress-nginx
	d663f5ea76e43       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   1256bb536b2eb       metrics-server-85b7d694d7-wbgcl            kube-system
	6678f7b494598       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   f775cf4077c93       snapshot-controller-7d9fbc56b8-rbvm4       kube-system
	9fdc4fbb9694b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                   kube-system
	d8a82613cbbf7       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           4 minutes ago            Running             registry                                 0                   8039a124c8a99       registry-66898fdd98-wlkhd                  kube-system
	a3db0f10ee2bd       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     4 minutes ago            Running             nvidia-device-plugin-ctr                 0                   fbb959e2e6dc3       nvidia-device-plugin-daemonset-x2b9d       kube-system
	da5b4d1a96892       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago            Exited              create                                   0                   70c138bd7bdd8       ingress-nginx-admission-create-r4gc4       ingress-nginx
	7e58d89c526a1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   63027a525bee4       csi-hostpath-resizer-0                     kube-system
	b5c710f4aca28       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   aad269cb1f759       kube-ingress-dns-minikube                  kube-system
	96b63ced4cbec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   40d563d4f9e24       coredns-66bc5c9577-pr27b                   kube-system
	e06a73003aabb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   3d6f692deab7a       storage-provisioner                        kube-system
	c3c9833a8ac94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   9d5cc90f7db66       kindnet-ssl2c                              kube-system
	3ce054faf2e39       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   0f1c32b3805f8       kube-proxy-8gptp                           kube-system
	d78cdb898250b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   4c45159c76a47       etcd-addons-806706                         kube-system
	edee49f1f2c30       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   7436e82c12abc       kube-controller-manager-addons-806706      kube-system
	5899705446a85       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   42d963a42e386       kube-scheduler-addons-806706               kube-system
	e997bf38b55bf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   15fab211dd370       kube-apiserver-addons-806706               kube-system
	
	
	==> coredns [96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1] <==
	[INFO] 10.244.0.8:53399 - 57026 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001902625s
	[INFO] 10.244.0.8:53399 - 13460 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000183324s
	[INFO] 10.244.0.8:53399 - 30409 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000176948s
	[INFO] 10.244.0.8:57863 - 1696 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149708s
	[INFO] 10.244.0.8:57863 - 1491 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075757s
	[INFO] 10.244.0.8:39269 - 52596 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078801s
	[INFO] 10.244.0.8:39269 - 52391 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192776s
	[INFO] 10.244.0.8:52532 - 47957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125634s
	[INFO] 10.244.0.8:52532 - 47776 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014266s
	[INFO] 10.244.0.8:47949 - 37240 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001272787s
	[INFO] 10.244.0.8:47949 - 37428 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001381298s
	[INFO] 10.244.0.8:55285 - 46820 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012465s
	[INFO] 10.244.0.8:55285 - 47001 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128366s
	[INFO] 10.244.0.20:39828 - 11535 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276138s
	[INFO] 10.244.0.20:39778 - 12043 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180452s
	[INFO] 10.244.0.20:56460 - 18671 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079318s
	[INFO] 10.244.0.20:39048 - 26232 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000283564s
	[INFO] 10.244.0.20:53333 - 19997 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165125s
	[INFO] 10.244.0.20:52896 - 61599 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102095s
	[INFO] 10.244.0.20:34225 - 40566 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002069999s
	[INFO] 10.244.0.20:43159 - 14199 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001978161s
	[INFO] 10.244.0.20:51583 - 16271 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001632625s
	[INFO] 10.244.0.20:49614 - 24774 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001696845s
	[INFO] 10.244.0.24:56897 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000214618s
	[INFO] 10.244.0.24:60900 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142882s
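	
	The NXDOMAIN/NOERROR pairs above show the resolver walking the pod's DNS search path (namespace domain, svc.cluster.local, cluster.local, then the node's EC2 domain) before the bare name finally answers: standard ndots behavior. A pod resolv.conf that would produce this exact query pattern looks roughly like the sketch below (the nameserver IP is the usual cluster-DNS default and is assumed here; the search list is inferred from the queries themselves):
	
	  nameserver 10.96.0.10
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  options ndots:5
	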
	
	
	==> describe nodes <==
	Name:               addons-806706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-806706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-806706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_07_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-806706
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-806706"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-806706
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:12:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:12:11 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:12:11 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:12:11 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:12:11 +0000   Thu, 02 Oct 2025 21:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-806706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdd37abed1cf4461af4aac68f6886a7b
	  System UUID:                97003614-72a7-4911-9ef4-c36e5a51170b
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     cloud-spanner-emulator-85f6b7fc65-l4pxd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  default                     hello-world-app-5d498dc89-58vbm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-jmfns                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  gcp-auth                    gcp-auth-78565c9fb4-x9mnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h8sxf    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m20s
	  kube-system                 coredns-66bc5c9577-pr27b                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m25s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 csi-hostpathplugin-r7mrn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 etcd-addons-806706                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m30s
	  kube-system                 kindnet-ssl2c                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m26s
	  kube-system                 kube-apiserver-addons-806706                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-addons-806706       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-8gptp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-addons-806706                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 metrics-server-85b7d694d7-wbgcl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m21s
	  kube-system                 nvidia-device-plugin-daemonset-x2b9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 registry-66898fdd98-wlkhd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 registry-creds-764b6fb674-v22d7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 registry-proxy-z5g9b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-ms2zp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 snapshot-controller-7d9fbc56b8-rbvm4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  local-path-storage          local-path-provisioner-648f6765c9-9pqrj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ldmgf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m24s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node addons-806706 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node addons-806706 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m38s (x8 over 5m38s)  kubelet          Node addons-806706 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m30s                  kubelet          Node addons-806706 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m30s                  kubelet          Node addons-806706 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m30s                  kubelet          Node addons-806706 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m27s                  node-controller  Node addons-806706 event: Registered Node addons-806706 in Controller
	  Normal   NodeReady                4m44s                  kubelet          Node addons-806706 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 20:02] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:05] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee] <==
	{"level":"warn","ts":"2025-10-02T21:07:00.201590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.306672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.352657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.404311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.426964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.455400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.494940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.516633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.542308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.581503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.637430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.672454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.710135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.746148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.767979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.788005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.879832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:07:10.309798Z","caller":"traceutil/trace.go:172","msg":"trace[213488489] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"104.375095ms","start":"2025-10-02T21:07:10.205410Z","end":"2025-10-02T21:07:10.309785Z","steps":["trace[213488489] 'process raft request'  (duration: 104.339863ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:07:10.310017Z","caller":"traceutil/trace.go:172","msg":"trace[267441551] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"104.887578ms","start":"2025-10-02T21:07:10.205121Z","end":"2025-10-02T21:07:10.310009Z","steps":["trace[267441551] 'process raft request'  (duration: 104.551166ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T21:07:16.552367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:16.580785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.779139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.796366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.885761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.914238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [95f9be3e8c94e0deefc712e6b63d8a6800487a0d163b134da855cb302a17c6bf] <==
	2025/10/02 21:08:50 GCP Auth Webhook started!
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:34 Ready to marshal response ...
	2025/10/02 21:09:34 Ready to write response ...
	2025/10/02 21:09:34 Ready to marshal response ...
	2025/10/02 21:09:34 Ready to write response ...
	2025/10/02 21:09:48 Ready to marshal response ...
	2025/10/02 21:09:48 Ready to write response ...
	2025/10/02 21:09:48 Ready to marshal response ...
	2025/10/02 21:09:48 Ready to write response ...
	2025/10/02 21:09:56 Ready to marshal response ...
	2025/10/02 21:09:56 Ready to write response ...
	2025/10/02 21:10:04 Ready to marshal response ...
	2025/10/02 21:10:04 Ready to write response ...
	2025/10/02 21:10:13 Ready to marshal response ...
	2025/10/02 21:10:13 Ready to write response ...
	2025/10/02 21:12:32 Ready to marshal response ...
	2025/10/02 21:12:32 Ready to write response ...
	
	
	==> kernel <==
	 21:12:35 up  5:54,  0 user,  load average: 0.55, 2.04, 3.11
	Linux addons-806706 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0] <==
	I1002 21:10:30.926404       1 main.go:301] handling current node
	I1002 21:10:40.918503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:10:40.918550       1 main.go:301] handling current node
	I1002 21:10:50.925118       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:10:50.925220       1 main.go:301] handling current node
	I1002 21:11:00.926764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:00.926800       1 main.go:301] handling current node
	I1002 21:11:10.923059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:10.923094       1 main.go:301] handling current node
	I1002 21:11:20.918209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:20.918239       1 main.go:301] handling current node
	I1002 21:11:30.926966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:30.927001       1 main.go:301] handling current node
	I1002 21:11:40.918314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:40.918430       1 main.go:301] handling current node
	I1002 21:11:50.919282       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:11:50.919316       1 main.go:301] handling current node
	I1002 21:12:00.924114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:00.924148       1 main.go:301] handling current node
	I1002 21:12:10.918348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:10.918468       1 main.go:301] handling current node
	I1002 21:12:20.919044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:20.919081       1 main.go:301] handling current node
	I1002 21:12:30.919914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:12:30.919945       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c] <==
	W1002 21:07:51.576571       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.230.182:443: connect: connection refused
	E1002 21:07:51.576616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.230.182:443: connect: connection refused" logger="UnhandledError"
	W1002 21:08:15.781010       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:15.781166       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 21:08:15.781183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 21:08:15.784781       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:15.784874       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 21:08:15.784893       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1002 21:08:36.905794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.167.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.167.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.167.158:443: connect: connection refused" logger="UnhandledError"
	W1002 21:08:36.905879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:36.905930       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 21:08:36.993456       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 21:09:23.718076       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59186: use of closed network connection
	E1002 21:09:24.013967       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59194: use of closed network connection
	I1002 21:09:45.317478       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1002 21:10:11.388894       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1002 21:10:12.807834       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 21:10:13.112988       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.47.9"}
	I1002 21:12:33.180431       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.216.45"}
	
	
	==> kube-controller-manager [edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd] <==
	I1002 21:07:08.799213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:07:08.799354       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:07:08.799463       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:07:08.809270       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:07:08.818855       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-806706" podCIDRs=["10.244.0.0/24"]
	I1002 21:07:08.821072       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:07:08.838127       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:07:08.838420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:07:08.838462       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:07:08.838491       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:07:08.839149       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:07:08.839342       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:07:08.842121       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:07:08.846597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 21:07:14.925919       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 21:07:38.771882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 21:07:38.772051       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 21:07:38.772092       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 21:07:38.854332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 21:07:38.859328       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 21:07:38.872469       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:07:39.059645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:07:53.985651       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 21:08:08.877490       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 21:08:09.067573       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33] <==
	I1002 21:07:10.747352       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:07:10.863715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:07:10.963951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:07:10.963993       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:07:10.964068       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:07:11.050193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:07:11.050250       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:07:11.059032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:07:11.059414       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:07:11.059429       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:07:11.061024       1 config.go:200] "Starting service config controller"
	I1002 21:07:11.061036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:07:11.061068       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:07:11.061072       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:07:11.061088       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:07:11.061092       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:07:11.061798       1 config.go:309] "Starting node config controller"
	I1002 21:07:11.061806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:07:11.061813       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:07:11.162449       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:07:11.162489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:07:11.162528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d] <==
	I1002 21:07:01.590329       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:07:03.817764       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:07:03.817874       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:07:03.823930       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:07:03.824073       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:07:03.824158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:07:03.824195       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:07:03.824244       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.824274       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.827710       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:07:03.827794       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:07:03.925636       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.925636       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:07:03.925653       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:12:01 addons-806706 kubelet[1267]: W1002 21:12:01.968543    1267 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/crio-6597cb70708b358b3b9072799b33e9537b16fe0374e11c4ddca49e3b7e951969 WatchSource:0}: Error finding container 6597cb70708b358b3b9072799b33e9537b16fe0374e11c4ddca49e3b7e951969: Status 404 returned error can't find the container with id 6597cb70708b358b3b9072799b33e9537b16fe0374e11c4ddca49e3b7e951969
	Oct 02 21:12:03 addons-806706 kubelet[1267]: I1002 21:12:03.103673    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-x2b9d" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:03 addons-806706 kubelet[1267]: I1002 21:12:03.765144    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:03 addons-806706 kubelet[1267]: I1002 21:12:03.765202    1267 scope.go:117] "RemoveContainer" containerID="4c2520854a3175f58a40d8dfb61a60122ed0ddd5d4ed0e9a95cc491c8cc0b405"
	Oct 02 21:12:03 addons-806706 kubelet[1267]: I1002 21:12:03.795529    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=108.917830409 podStartE2EDuration="1m50.795500615s" podCreationTimestamp="2025-10-02 21:10:13 +0000 UTC" firstStartedPulling="2025-10-02 21:10:13.385624997 +0000 UTC m=+188.495381556" lastFinishedPulling="2025-10-02 21:10:15.263295195 +0000 UTC m=+190.373051762" observedRunningTime="2025-10-02 21:10:15.412842925 +0000 UTC m=+190.522599500" watchObservedRunningTime="2025-10-02 21:12:03.795500615 +0000 UTC m=+298.905257174"
	Oct 02 21:12:04 addons-806706 kubelet[1267]: I1002 21:12:04.770912    1267 scope.go:117] "RemoveContainer" containerID="4c2520854a3175f58a40d8dfb61a60122ed0ddd5d4ed0e9a95cc491c8cc0b405"
	Oct 02 21:12:04 addons-806706 kubelet[1267]: I1002 21:12:04.771039    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:04 addons-806706 kubelet[1267]: I1002 21:12:04.771772    1267 scope.go:117] "RemoveContainer" containerID="18c8d60662b17980eb6bd6b9e27d52afa91a52865af305afa76e0fe60a6a3796"
	Oct 02 21:12:04 addons-806706 kubelet[1267]: E1002 21:12:04.772030    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-v22d7_kube-system(a75a4bb4-6459-4c36-9ae4-df35421d3a30)\"" pod="kube-system/registry-creds-764b6fb674-v22d7" podUID="a75a4bb4-6459-4c36-9ae4-df35421d3a30"
	Oct 02 21:12:05 addons-806706 kubelet[1267]: I1002 21:12:05.776062    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:05 addons-806706 kubelet[1267]: I1002 21:12:05.776115    1267 scope.go:117] "RemoveContainer" containerID="18c8d60662b17980eb6bd6b9e27d52afa91a52865af305afa76e0fe60a6a3796"
	Oct 02 21:12:05 addons-806706 kubelet[1267]: E1002 21:12:05.776259    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-v22d7_kube-system(a75a4bb4-6459-4c36-9ae4-df35421d3a30)\"" pod="kube-system/registry-creds-764b6fb674-v22d7" podUID="a75a4bb4-6459-4c36-9ae4-df35421d3a30"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: I1002 21:12:17.104462    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: I1002 21:12:17.104972    1267 scope.go:117] "RemoveContainer" containerID="18c8d60662b17980eb6bd6b9e27d52afa91a52865af305afa76e0fe60a6a3796"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: I1002 21:12:17.819869    1267 scope.go:117] "RemoveContainer" containerID="18c8d60662b17980eb6bd6b9e27d52afa91a52865af305afa76e0fe60a6a3796"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: I1002 21:12:17.820180    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: I1002 21:12:17.820229    1267 scope.go:117] "RemoveContainer" containerID="5a6cf79da3ce59ebd5a47eded41891313015b650ed300501ab629e2587a39ed4"
	Oct 02 21:12:17 addons-806706 kubelet[1267]: E1002 21:12:17.820391    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-v22d7_kube-system(a75a4bb4-6459-4c36-9ae4-df35421d3a30)\"" pod="kube-system/registry-creds-764b6fb674-v22d7" podUID="a75a4bb4-6459-4c36-9ae4-df35421d3a30"
	Oct 02 21:12:24 addons-806706 kubelet[1267]: I1002 21:12:24.104321    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-z5g9b" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:26 addons-806706 kubelet[1267]: I1002 21:12:26.104105    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-wlkhd" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:31 addons-806706 kubelet[1267]: I1002 21:12:31.103980    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-v22d7" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 21:12:31 addons-806706 kubelet[1267]: I1002 21:12:31.104056    1267 scope.go:117] "RemoveContainer" containerID="5a6cf79da3ce59ebd5a47eded41891313015b650ed300501ab629e2587a39ed4"
	Oct 02 21:12:31 addons-806706 kubelet[1267]: E1002 21:12:31.104220    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-v22d7_kube-system(a75a4bb4-6459-4c36-9ae4-df35421d3a30)\"" pod="kube-system/registry-creds-764b6fb674-v22d7" podUID="a75a4bb4-6459-4c36-9ae4-df35421d3a30"
	Oct 02 21:12:33 addons-806706 kubelet[1267]: I1002 21:12:33.081284    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc-gcp-creds\") pod \"hello-world-app-5d498dc89-58vbm\" (UID: \"4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc\") " pod="default/hello-world-app-5d498dc89-58vbm"
	Oct 02 21:12:33 addons-806706 kubelet[1267]: I1002 21:12:33.081358    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sp7k\" (UniqueName: \"kubernetes.io/projected/4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc-kube-api-access-2sp7k\") pod \"hello-world-app-5d498dc89-58vbm\" (UID: \"4e85a2c5-ca4b-4ac6-a753-3e2f91aae9dc\") " pod="default/hello-world-app-5d498dc89-58vbm"
	
	
	==> storage-provisioner [e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd] <==
	W1002 21:12:10.121576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:12.128118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:12.135022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:14.138691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:14.143172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:16.145874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:16.150521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:18.153506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:18.160286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:20.164411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:20.172017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:22.175148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:22.184868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:24.189352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:24.194020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:26.197053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:26.201548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:28.204752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:28.211636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:30.215317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:30.220145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:32.223927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:32.228869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:34.237700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:12:34.252199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
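A note on two recurring warnings in the dump above, with quick checks one can run against the cluster (hedged: both commands assume the addons-806706 profile from this run is still up). The storage-provisioner block repeats "v1 Endpoints is deprecated in v1.33+" because the provisioner (likely its leader-election client) still watches the core/v1 Endpoints API, and the kube-apiserver block shows the aggregated metrics API returning 503s until ~21:08:36. The named replacement resource and the aggregated API can be inspected directly:

	kubectl --context addons-806706 get endpointslices.discovery.k8s.io -A    # discovery.k8s.io/v1 replacement for core/v1 Endpoints
	kubectl --context addons-806706 get apiservice v1beta1.metrics.k8s.io     # aggregated API the apiserver log shows failing with 503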
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-806706 -n addons-806706
helpers_test.go:269: (dbg) Run:  kubectl --context addons-806706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw: exit status 1 (113.275401ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r4gc4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4cbvw" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw: exit status 1
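The NotFound errors here are a cleanup race rather than a lookup bug: the field-selector listing at helpers_test.go:269 caught the completed ingress-nginx admission Job pods, but they were garbage-collected before the follow-up describe ran. Re-running the same listing afterwards (command taken from the log above; assumes the cluster is still up) would show them gone:

	kubectl --context addons-806706 get po -A --field-selector=status.phase!=Running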
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (284.917819ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:12:36.501814 1282770 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:12:36.502591 1282770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:36.502623 1282770 out.go:374] Setting ErrFile to fd 2...
	I1002 21:12:36.502628 1282770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:36.502923 1282770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:12:36.503243 1282770 mustload.go:65] Loading cluster: addons-806706
	I1002 21:12:36.503661 1282770 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:12:36.503674 1282770 addons.go:606] checking whether the cluster is paused
	I1002 21:12:36.503781 1282770 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:12:36.503797 1282770 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:12:36.504238 1282770 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:12:36.522635 1282770 ssh_runner.go:195] Run: systemctl --version
	I1002 21:12:36.522695 1282770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:12:36.540258 1282770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:12:36.640700 1282770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:12:36.640793 1282770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:12:36.676661 1282770 cri.go:89] found id: "5a6cf79da3ce59ebd5a47eded41891313015b650ed300501ab629e2587a39ed4"
	I1002 21:12:36.676737 1282770 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:12:36.676756 1282770 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:12:36.676773 1282770 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:12:36.676777 1282770 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:12:36.676784 1282770 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:12:36.676807 1282770 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:12:36.676823 1282770 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:12:36.676831 1282770 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:12:36.676838 1282770 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:12:36.676841 1282770 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:12:36.676844 1282770 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:12:36.676848 1282770 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:12:36.676851 1282770 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:12:36.676854 1282770 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:12:36.676859 1282770 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:12:36.676866 1282770 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:12:36.676870 1282770 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:12:36.676873 1282770 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:12:36.676876 1282770 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:12:36.676881 1282770 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:12:36.676884 1282770 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:12:36.676887 1282770 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:12:36.676904 1282770 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:12:36.676908 1282770 cri.go:89] found id: ""
	I1002 21:12:36.676976 1282770 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:12:36.694023 1282770 out.go:203] 
	W1002 21:12:36.697051 1282770 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:12:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:12:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:12:36.697092 1282770 out.go:285] * 
	* 
	W1002 21:12:36.705974 1282770 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:12:36.709050 1282770 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable ingress --alsologtostderr -v=1: exit status 11 (260.04208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:12:36.771049 1282881 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:12:36.771839 1282881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:36.771855 1282881 out.go:374] Setting ErrFile to fd 2...
	I1002 21:12:36.771861 1282881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:12:36.772175 1282881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:12:36.772542 1282881 mustload.go:65] Loading cluster: addons-806706
	I1002 21:12:36.772988 1282881 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:12:36.773010 1282881 addons.go:606] checking whether the cluster is paused
	I1002 21:12:36.773159 1282881 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:12:36.773193 1282881 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:12:36.773690 1282881 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:12:36.791752 1282881 ssh_runner.go:195] Run: systemctl --version
	I1002 21:12:36.791838 1282881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:12:36.811887 1282881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:12:36.908595 1282881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:12:36.908737 1282881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:12:36.940208 1282881 cri.go:89] found id: "5a6cf79da3ce59ebd5a47eded41891313015b650ed300501ab629e2587a39ed4"
	I1002 21:12:36.940232 1282881 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:12:36.940237 1282881 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:12:36.940241 1282881 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:12:36.940245 1282881 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:12:36.940249 1282881 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:12:36.940252 1282881 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:12:36.940255 1282881 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:12:36.940259 1282881 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:12:36.940269 1282881 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:12:36.940274 1282881 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:12:36.940280 1282881 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:12:36.940283 1282881 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:12:36.940287 1282881 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:12:36.940290 1282881 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:12:36.940295 1282881 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:12:36.940303 1282881 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:12:36.940314 1282881 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:12:36.940317 1282881 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:12:36.940320 1282881 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:12:36.940325 1282881 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:12:36.940330 1282881 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:12:36.940333 1282881 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:12:36.940337 1282881 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:12:36.940340 1282881 cri.go:89] found id: ""
	I1002 21:12:36.940390 1282881 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:12:36.956176 1282881 out.go:203] 
	W1002 21:12:36.959094 1282881 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:12:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:12:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:12:36.959119 1282881 out.go:285] * 
	* 
	W1002 21:12:36.968235 1282881 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:12:36.971304 1282881 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.49s)
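All of the MK_ADDON_DISABLE_PAUSED exits in this report share one root cause, visible in the stderr above: before disabling an addon, minikube checks for paused containers by running "sudo runc list -f json" on the node, and that fails with "open /run/runc: no such file or directory" because the runc state directory is absent on this crio image. A sketch of reproducing the probe by hand (assumes the addons-806706 profile is still running; the /run/crun check is an assumption about which OCI runtime crio is wired to here):

	minikube -p addons-806706 ssh
	sudo runc list -f json                                               # fails exactly as in the log: open /run/runc: no such file or directory
	sudo ls /run/crun                                                    # assumption: if crio uses crun, container state lives here instead of /run/runc
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system    # the CRI-level listing from the log, which does succeed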

x
+
TestAddons/parallel/InspektorGadget (5.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jmfns" [082fa482-6ccc-4b76-9e16-d993bc10afff] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.087075618s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (397.416512ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:10:11.673157 1280989 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:11.674071 1280989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:11.674123 1280989 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:11.674145 1280989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:11.674486 1280989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:11.674861 1280989 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:11.675298 1280989 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:11.675351 1280989 addons.go:606] checking whether the cluster is paused
	I1002 21:10:11.675498 1280989 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:11.675535 1280989 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:11.676040 1280989 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:11.721707 1280989 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:11.721762 1280989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:11.755483 1280989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:11.881334 1280989 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:11.881425 1280989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:11.924747 1280989 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:11.924766 1280989 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:11.924771 1280989 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:11.924775 1280989 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:11.924779 1280989 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:11.924783 1280989 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:11.924786 1280989 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:11.924789 1280989 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:11.924792 1280989 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:11.924798 1280989 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:11.924804 1280989 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:11.924807 1280989 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:11.924810 1280989 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:11.924813 1280989 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:11.924817 1280989 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:11.924821 1280989 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:11.924824 1280989 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:11.924828 1280989 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:11.924831 1280989 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:11.924834 1280989 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:11.924838 1280989 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:11.924841 1280989 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:11.924844 1280989 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:11.924847 1280989 cri.go:89] found id: ""
	I1002 21:10:11.924896 1280989 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:11.951341 1280989 out.go:203] 
	W1002 21:10:11.955592 1280989 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:11.955617 1280989 out.go:285] * 
	* 
	W1002 21:10:11.971505 1280989 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:11.975223 1280989 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.49s)

x
+
TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.0626ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007856454s
addons_test.go:463: (dbg) Run:  kubectl --context addons-806706 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (272.656563ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:10:18.153874 1281468 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:18.154699 1281468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:18.154714 1281468 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:18.154719 1281468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:18.154986 1281468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:18.155296 1281468 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:18.155664 1281468 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:18.155681 1281468 addons.go:606] checking whether the cluster is paused
	I1002 21:10:18.155780 1281468 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:18.155796 1281468 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:18.156237 1281468 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:18.174336 1281468 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:18.174392 1281468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:18.191472 1281468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:18.296867 1281468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:18.296972 1281468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:18.329063 1281468 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:18.329082 1281468 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:18.329087 1281468 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:18.329090 1281468 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:18.329094 1281468 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:18.329102 1281468 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:18.329106 1281468 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:18.329109 1281468 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:18.329112 1281468 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:18.329118 1281468 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:18.329121 1281468 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:18.329125 1281468 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:18.329129 1281468 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:18.329132 1281468 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:18.329135 1281468 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:18.329140 1281468 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:18.329143 1281468 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:18.329147 1281468 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:18.329149 1281468 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:18.329152 1281468 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:18.329157 1281468 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:18.329160 1281468 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:18.329163 1281468 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:18.329165 1281468 cri.go:89] found id: ""
	I1002 21:10:18.329218 1281468 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:18.345340 1281468 out.go:203] 
	W1002 21:10:18.348463 1281468 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:18.348495 1281468 out.go:285] * 
	* 
	W1002 21:10:18.357343 1281468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:18.360168 1281468 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)
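
Editorial note: every exit-status-11 failure in this report follows the same path visible in the stderr above. minikube enumerates kube-system containers with crictl, then shells out to `sudo runc list -f json` to decide whether any are paused; on this crio node runc's default state root /run/runc does not exist, so the probe exits 1 and the addon command aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal Go sketch of such a paused-state probe, added for illustration only; it shows the failing step, it is not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer keeps only the fields of `runc list -f json` output that
// the paused check needs; the real schema carries more (pid, bundle, ...).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused shells out the same way the log above does. On this node the
// command fails because /run/runc is absent, which is exactly the
// "Process exited with status 1" branch in the stderr.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err) // the exit-11 path in these tests
		return
	}
	fmt.Println("paused:", ids)
}
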

                                                
                                    
TestAddons/parallel/CSI (41.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1002 21:09:30.723313 1272514 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 21:09:30.726923 1272514 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 21:09:30.726949 1272514 kapi.go:107] duration metric: took 3.656857ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.666981ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-806706 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-806706 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d80cea1d-2b82-4118-944d-01f74bb82e76] Pending
helpers_test.go:352: "task-pv-pod" [d80cea1d-2b82-4118-944d-01f74bb82e76] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d80cea1d-2b82-4118-944d-01f74bb82e76] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005195439s
addons_test.go:572: (dbg) Run:  kubectl --context addons-806706 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-806706 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-806706 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-806706 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-806706 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-806706 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-806706 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [17880673-f81c-4abe-8af0-1d693004da3d] Pending
helpers_test.go:352: "task-pv-pod-restore" [17880673-f81c-4abe-8af0-1d693004da3d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [17880673-f81c-4abe-8af0-1d693004da3d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008843416s
addons_test.go:614: (dbg) Run:  kubectl --context addons-806706 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-806706 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-806706 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (345.367599ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:10:11.947362 1281025 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:11.948035 1281025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:11.948053 1281025 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:11.948059 1281025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:11.948373 1281025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:11.948702 1281025 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:11.949086 1281025 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:11.949105 1281025 addons.go:606] checking whether the cluster is paused
	I1002 21:10:11.949211 1281025 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:11.949232 1281025 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:11.949689 1281025 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:12.012529 1281025 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:12.012595 1281025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:12.033166 1281025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:12.133942 1281025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:12.134129 1281025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:12.173700 1281025 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:12.173723 1281025 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:12.173731 1281025 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:12.173734 1281025 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:12.173737 1281025 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:12.173741 1281025 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:12.173745 1281025 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:12.173748 1281025 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:12.173752 1281025 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:12.173758 1281025 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:12.173761 1281025 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:12.173764 1281025 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:12.173767 1281025 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:12.173771 1281025 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:12.173774 1281025 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:12.173783 1281025 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:12.173789 1281025 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:12.173794 1281025 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:12.173796 1281025 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:12.173800 1281025 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:12.173805 1281025 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:12.173808 1281025 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:12.173811 1281025 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:12.173814 1281025 cri.go:89] found id: ""
	I1002 21:10:12.173867 1281025 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:12.193095 1281025 out.go:203] 
	W1002 21:10:12.199560 1281025 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:12.199605 1281025 out.go:285] * 
	* 
	W1002 21:10:12.208206 1281025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:12.212700 1281025 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (271.06369ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:10:12.279001 1281087 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:12.280456 1281087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:12.280515 1281087 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:12.280528 1281087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:12.280827 1281087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:12.281138 1281087 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:12.281511 1281087 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:12.281528 1281087 addons.go:606] checking whether the cluster is paused
	I1002 21:10:12.281631 1281087 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:12.281651 1281087 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:12.282231 1281087 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:12.301483 1281087 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:12.301539 1281087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:12.323526 1281087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:12.420706 1281087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:12.420811 1281087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:12.450583 1281087 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:12.450693 1281087 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:12.450706 1281087 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:12.450715 1281087 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:12.450744 1281087 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:12.450775 1281087 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:12.450785 1281087 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:12.450789 1281087 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:12.450792 1281087 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:12.450799 1281087 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:12.450811 1281087 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:12.450818 1281087 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:12.450821 1281087 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:12.450824 1281087 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:12.450827 1281087 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:12.450832 1281087 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:12.450835 1281087 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:12.450839 1281087 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:12.450844 1281087 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:12.450847 1281087 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:12.450856 1281087 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:12.450860 1281087 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:12.450862 1281087 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:12.450865 1281087 cri.go:89] found id: ""
	I1002 21:10:12.450933 1281087 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:12.468462 1281087 out.go:203] 
	W1002 21:10:12.472362 1281087 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:12.472389 1281087 out.go:285] * 
	* 
	W1002 21:10:12.481209 1281087 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:12.485044 1281087 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.77s)
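
Editorial note: the CSI body itself passed; only the trailing addon-disable calls hit the runc probe described above. The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` calls in the log are the harness polling each claim until it leaves Pending. A sketch of the same wait written against client-go follows; the helper name and the clientset construction are assumptions for illustration, not code from this suite.

package csiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls a claim's status.phase until it reads Bound, the
// condition the repeated jsonpath polls above are waiting for. The
// clientset is assumed to be built elsewhere (e.g. from a kubeconfig).
func waitForPVCBound(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s/%s still %q after %s", ns, name, pvc.Status.Phase, timeout)
		}
		time.Sleep(2 * time.Second) // the harness re-runs kubectl on a similar cadence
	}
}
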

                                                
                                    
TestAddons/parallel/Headlamp (3.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-806706 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-806706 --alsologtostderr -v=1: exit status 11 (274.185999ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:10:02.892078 1280350 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:02.892845 1280350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:02.892861 1280350 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:02.892867 1280350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:02.893198 1280350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:02.893554 1280350 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:02.893992 1280350 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:02.894014 1280350 addons.go:606] checking whether the cluster is paused
	I1002 21:10:02.894231 1280350 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:02.894257 1280350 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:02.895066 1280350 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:02.914604 1280350 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:02.914658 1280350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:02.931757 1280350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:03.029276 1280350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:03.029363 1280350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:03.061516 1280350 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:03.061541 1280350 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:03.061546 1280350 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:03.061549 1280350 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:03.061553 1280350 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:03.061557 1280350 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:03.061561 1280350 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:03.061564 1280350 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:03.061568 1280350 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:03.061585 1280350 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:03.061597 1280350 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:03.061600 1280350 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:03.061603 1280350 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:03.061607 1280350 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:03.061611 1280350 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:03.061616 1280350 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:03.061623 1280350 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:03.061628 1280350 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:03.061632 1280350 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:03.061635 1280350 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:03.061640 1280350 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:03.061647 1280350 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:03.061658 1280350 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:03.061661 1280350 cri.go:89] found id: ""
	I1002 21:10:03.061714 1280350 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:03.077197 1280350 out.go:203] 
	W1002 21:10:03.080133 1280350 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:03.080166 1280350 out.go:285] * 
	* 
	W1002 21:10:03.089079 1280350 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:03.092196 1280350 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-806706 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-806706
helpers_test.go:243: (dbg) docker inspect addons-806706:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326",
	        "Created": "2025-10-02T21:06:39.319392408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1273669,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:06:39.396538887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/hostname",
	        "HostsPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/hosts",
	        "LogPath": "/var/lib/docker/containers/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326-json.log",
	        "Name": "/addons-806706",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-806706:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-806706",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326",
	                "LowerDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0f5ce44aae1a1496ae218ad28067c33dd57e32dc5987ef6130698f22500d4e79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-806706",
	                "Source": "/var/lib/docker/volumes/addons-806706/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-806706",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-806706",
	                "name.minikube.sigs.k8s.io": "addons-806706",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d72fa3be9f92cc6781c93044512038d9c9312512a7165ebfb0e6bfd1c1cf2449",
	            "SandboxKey": "/var/run/docker/netns/d72fa3be9f92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34271"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34272"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34273"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34274"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-806706": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:17:88:4d:37:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9d7cb88a1b7da1c76acaec51b35fa75e6ad9973eeb74a743230e10d2aa77d173",
	                    "EndpointID": "1b864a8d2c7536528d3e52fde8072c5e0edbbcc0af0bda67bbb1694b1300cc8e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-806706",
	                        "9be5d6290945"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
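
Editorial note: the Ports block in the inspect output above is where the SSH endpoint used throughout these logs comes from: 22/tcp is published on 127.0.0.1:34271, matching the sshutil.go client lines earlier. The cli_runner entries resolve it with the Go template shown in the log; the same lookup, sketched as a standalone program (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the cli_runner lines in this log use to resolve the
	// host port mapped to the guest's SSH port (22/tcp).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-806706").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 34271 in this run
}
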
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-806706 -n addons-806706
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-806706 logs -n 25: (1.884681165s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-058204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-058204   │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │ 02 Oct 25 21:05 UTC │
	│ delete  │ -p download-only-058204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-058204   │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │ 02 Oct 25 21:05 UTC │
	│ start   │ -o=json --download-only -p download-only-638488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-638488   │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ delete  │ -p download-only-638488                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-638488   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ delete  │ -p download-only-058204                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-058204   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ delete  │ -p download-only-638488                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-638488   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ start   │ --download-only -p download-docker-121503 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-121503 │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ delete  │ -p download-docker-121503                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-121503 │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ start   │ --download-only -p binary-mirror-339125 --alsologtostderr --binary-mirror http://127.0.0.1:33819 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-339125   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ delete  │ -p binary-mirror-339125                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-339125   │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ addons  │ enable dashboard -p addons-806706                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ addons  │ disable dashboard -p addons-806706                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ start   │ -p addons-806706 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ ip      │ addons-806706 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ ssh     │ addons-806706 ssh cat /opt/local-path-provisioner/pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ addons  │ addons-806706 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ addons  │ addons-806706 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	│ addons  │ enable headlamp -p addons-806706 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-806706          │ jenkins │ v1.37.0 │ 02 Oct 25 21:10 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:06:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
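
Each entry below follows that klog line format. As an aside, such a line can be pulled apart with a few lines of Go; this is an illustrative sketch, not minikube code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		line := "I1002 21:06:13.121339 1273271 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("sev=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}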
	I1002 21:06:13.121339 1273271 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:06:13.121464 1273271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:06:13.121476 1273271 out.go:374] Setting ErrFile to fd 2...
	I1002 21:06:13.121482 1273271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:06:13.121746 1273271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:06:13.122246 1273271 out.go:368] Setting JSON to false
	I1002 21:06:13.123268 1273271 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20899,"bootTime":1759418275,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:06:13.123344 1273271 start.go:140] virtualization:  
	I1002 21:06:13.126799 1273271 out.go:179] * [addons-806706] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:06:13.130740 1273271 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:06:13.130875 1273271 notify.go:220] Checking for updates...
	I1002 21:06:13.136704 1273271 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:06:13.139666 1273271 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:06:13.142531 1273271 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:06:13.145295 1273271 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:06:13.148362 1273271 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:06:13.151421 1273271 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:06:13.186745 1273271 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:06:13.186952 1273271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:06:13.246945 1273271 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 21:06:13.238002623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
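
The probe above shells out to "docker system info --format {{json .}}" and decodes the JSON. A minimal Go sketch of the same pattern (the struct below is a small assumed subset of the fields visible in the log, not minikube's actual type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo decodes only the handful of fields this sketch cares about.
	type dockerInfo struct {
		NCPU         int    `json:"NCPU"`
		MemTotal     int64  `json:"MemTotal"`
		CgroupDriver string `json:"CgroupDriver"`
		OSType       string `json:"OSType"`
		Architecture string `json:"Architecture"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", info)
	}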
	I1002 21:06:13.247060 1273271 docker.go:318] overlay module found
	I1002 21:06:13.250080 1273271 out.go:179] * Using the docker driver based on user configuration
	I1002 21:06:13.252970 1273271 start.go:304] selected driver: docker
	I1002 21:06:13.252988 1273271 start.go:924] validating driver "docker" against <nil>
	I1002 21:06:13.253003 1273271 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:06:13.253740 1273271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:06:13.305284 1273271 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-10-02 21:06:13.295940664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:06:13.305442 1273271 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:06:13.305691 1273271 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:06:13.308441 1273271 out.go:179] * Using Docker driver with root privileges
	I1002 21:06:13.311217 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:06:13.311287 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:06:13.311300 1273271 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:06:13.311373 1273271 start.go:348] cluster config:
	{Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:06:13.314316 1273271 out.go:179] * Starting "addons-806706" primary control-plane node in "addons-806706" cluster
	I1002 21:06:13.317190 1273271 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:06:13.320049 1273271 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:06:13.322838 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:13.322898 1273271 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:06:13.322912 1273271 cache.go:58] Caching tarball of preloaded images
	I1002 21:06:13.322934 1273271 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:06:13.322998 1273271 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:06:13.323008 1273271 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:06:13.323391 1273271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json ...
	I1002 21:06:13.323423 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json: {Name:mkb1cb32b6df00b640649c3c3bbb07793752531e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:13.338513 1273271 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 21:06:13.338655 1273271 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 21:06:13.338679 1273271 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 21:06:13.338688 1273271 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 21:06:13.338697 1273271 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 21:06:13.338703 1273271 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 21:06:31.767536 1273271 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 21:06:31.767586 1273271 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:06:31.767616 1273271 start.go:360] acquireMachinesLock for addons-806706: {Name:mka9cc2a7600d2ba078caf421120722db5c4e46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:06:31.768340 1273271 start.go:364] duration metric: took 696.085µs to acquireMachinesLock for "addons-806706"
	I1002 21:06:31.768381 1273271 start.go:93] Provisioning new machine with config: &{Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:06:31.768468 1273271 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:06:31.771911 1273271 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 21:06:31.772179 1273271 start.go:159] libmachine.API.Create for "addons-806706" (driver="docker")
	I1002 21:06:31.772238 1273271 client.go:168] LocalClient.Create starting
	I1002 21:06:31.772360 1273271 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 21:06:32.096500 1273271 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 21:06:32.529400 1273271 cli_runner.go:164] Run: docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:06:32.548362 1273271 cli_runner.go:211] docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:06:32.548472 1273271 network_create.go:284] running [docker network inspect addons-806706] to gather additional debugging logs...
	I1002 21:06:32.548493 1273271 cli_runner.go:164] Run: docker network inspect addons-806706
	W1002 21:06:32.566729 1273271 cli_runner.go:211] docker network inspect addons-806706 returned with exit code 1
	I1002 21:06:32.566776 1273271 network_create.go:287] error running [docker network inspect addons-806706]: docker network inspect addons-806706: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-806706 not found
	I1002 21:06:32.566791 1273271 network_create.go:289] output of [docker network inspect addons-806706]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-806706 not found
	
	** /stderr **
	I1002 21:06:32.566905 1273271 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:06:32.582890 1273271 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b9b450}
	I1002 21:06:32.582938 1273271 network_create.go:124] attempt to create docker network addons-806706 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:06:32.582998 1273271 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-806706 addons-806706
	I1002 21:06:32.642620 1273271 network_create.go:108] docker network addons-806706 192.168.49.0/24 created
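
The inspect-then-create sequence above is a plain existence check: "docker network inspect" exits non-zero for a missing network, so a free private subnet is picked and the network is created. A minimal Go sketch of the same flow (names, subnet, and flags copied from the logged command; not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensureNetwork(name, subnet, gateway string) error {
		// Inspect fails when the network does not exist yet.
		if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
			return nil // already present
		}
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("network create: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := ensureNetwork("addons-806706", "192.168.49.0/24", "192.168.49.1"); err != nil {
			panic(err)
		}
	}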
	I1002 21:06:32.642657 1273271 kic.go:121] calculated static IP "192.168.49.2" for the "addons-806706" container
	I1002 21:06:32.642743 1273271 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:06:32.662706 1273271 cli_runner.go:164] Run: docker volume create addons-806706 --label name.minikube.sigs.k8s.io=addons-806706 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:06:32.680580 1273271 oci.go:103] Successfully created a docker volume addons-806706
	I1002 21:06:32.680678 1273271 cli_runner.go:164] Run: docker run --rm --name addons-806706-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --entrypoint /usr/bin/test -v addons-806706:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:06:34.846376 1273271 cli_runner.go:217] Completed: docker run --rm --name addons-806706-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --entrypoint /usr/bin/test -v addons-806706:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.165658628s)
	I1002 21:06:34.846407 1273271 oci.go:107] Successfully prepared a docker volume addons-806706
	I1002 21:06:34.846434 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:34.846452 1273271 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:06:34.846541 1273271 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-806706:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:06:39.247387 1273271 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-806706:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.400792798s)
	I1002 21:06:39.247424 1273271 kic.go:203] duration metric: took 4.40096744s to extract preloaded images to volume ...
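
The extraction step mounts the preload tarball read-only next to the named volume and runs tar inside a throwaway container, so the volume is populated without unpacking through the host filesystem. A Go sketch of that pattern (paths and image are placeholders standing in for the logged values):

	package main

	import "os/exec"

	func main() {
		tarball := "/path/to/preloaded-images.tar.lz4" // placeholder
		volume := "addons-806706"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
		// Mount the tarball read-only plus the named volume, then untar inside
		// a disposable container whose entrypoint is tar (mirrors the logged command).
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(string(out))
		}
	}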
	W1002 21:06:39.247576 1273271 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:06:39.247694 1273271 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:06:39.305141 1273271 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-806706 --name addons-806706 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-806706 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-806706 --network addons-806706 --ip 192.168.49.2 --volume addons-806706:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:06:39.606899 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Running}}
	I1002 21:06:39.629340 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:39.659649 1273271 cli_runner.go:164] Run: docker exec addons-806706 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:06:39.734622 1273271 oci.go:144] the created container "addons-806706" has a running status.
	I1002 21:06:39.734654 1273271 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa...
	I1002 21:06:40.011473 1273271 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:06:40.064091 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:40.088102 1273271 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:06:40.088128 1273271 kic_runner.go:114] Args: [docker exec --privileged addons-806706 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:06:40.157846 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:06:40.179389 1273271 machine.go:93] provisionDockerMachine start ...
	I1002 21:06:40.179511 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:40.200361 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:40.200704 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:40.200721 1273271 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:06:40.201419 1273271 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
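
The dial error here is expected: sshd inside the just-started container is not accepting connections yet, and the client retries until the handshake succeeds (about three seconds later, below). A minimal polling sketch, using plain TCP reachability as a simplification and the forwarded host port from this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:34271" // host port Docker mapped to the container's 22/tcp
		for attempt := 1; attempt <= 30; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("ssh port reachable after", attempt, "attempt(s)")
				return
			}
			time.Sleep(time.Second)
		}
		panic("ssh port never became reachable")
	}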
	I1002 21:06:43.335686 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-806706
	
	I1002 21:06:43.336453 1273271 ubuntu.go:182] provisioning hostname "addons-806706"
	I1002 21:06:43.336535 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:43.353867 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:43.354194 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:43.354206 1273271 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-806706 && echo "addons-806706" | sudo tee /etc/hostname
	I1002 21:06:43.496201 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-806706
	
	I1002 21:06:43.496285 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:43.514653 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:43.514974 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:43.514995 1273271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-806706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-806706/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-806706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:06:43.646703 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:06:43.646731 1273271 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 21:06:43.646757 1273271 ubuntu.go:190] setting up certificates
	I1002 21:06:43.646766 1273271 provision.go:84] configureAuth start
	I1002 21:06:43.646842 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:43.663150 1273271 provision.go:143] copyHostCerts
	I1002 21:06:43.663234 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 21:06:43.663376 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 21:06:43.663447 1273271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 21:06:43.663507 1273271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.addons-806706 san=[127.0.0.1 192.168.49.2 addons-806706 localhost minikube]
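
configureAuth produces a CA-signed server certificate whose SANs are exactly the names and IPs listed above. A standalone Go sketch of that kind of cert generation (illustrative only, not minikube's implementation; error handling elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key and self-signed CA certificate (stand-ins for ca.pem / ca-key.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-806706"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-806706", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}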
	I1002 21:06:44.759554 1273271 provision.go:177] copyRemoteCerts
	I1002 21:06:44.759624 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:06:44.759675 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:44.775869 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:44.873459 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:06:44.890817 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 21:06:44.907788 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:06:44.924703 1273271 provision.go:87] duration metric: took 1.277909883s to configureAuth
	I1002 21:06:44.924732 1273271 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:06:44.924940 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:06:44.925047 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:44.941686 1273271 main.go:141] libmachine: Using SSH client type: native
	I1002 21:06:44.942007 1273271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34271 <nil> <nil>}
	I1002 21:06:44.942056 1273271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:06:45.330150 1273271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:06:45.330244 1273271 machine.go:96] duration metric: took 5.150826591s to provisionDockerMachine
	I1002 21:06:45.330283 1273271 client.go:171] duration metric: took 13.558032024s to LocalClient.Create
	I1002 21:06:45.330346 1273271 start.go:167] duration metric: took 13.558165618s to libmachine.API.Create "addons-806706"
	I1002 21:06:45.330382 1273271 start.go:293] postStartSetup for "addons-806706" (driver="docker")
	I1002 21:06:45.330435 1273271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:06:45.330568 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:06:45.330684 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.387898 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.494809 1273271 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:06:45.498593 1273271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:06:45.498620 1273271 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:06:45.498631 1273271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 21:06:45.498702 1273271 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 21:06:45.498723 1273271 start.go:296] duration metric: took 168.30807ms for postStartSetup
	I1002 21:06:45.499062 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:45.516293 1273271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/config.json ...
	I1002 21:06:45.516603 1273271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:06:45.516646 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.533460 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.627370 1273271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:06:45.633584 1273271 start.go:128] duration metric: took 13.865099892s to createHost
	I1002 21:06:45.633610 1273271 start.go:83] releasing machines lock for "addons-806706", held for 13.86525257s
	I1002 21:06:45.633707 1273271 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-806706
	I1002 21:06:45.650269 1273271 ssh_runner.go:195] Run: cat /version.json
	I1002 21:06:45.650330 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.650573 1273271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:06:45.650635 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:06:45.674253 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.677548 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:06:45.773595 1273271 ssh_runner.go:195] Run: systemctl --version
	I1002 21:06:45.867753 1273271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:06:45.904646 1273271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:06:45.909027 1273271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:06:45.909102 1273271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:06:45.937107 1273271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 21:06:45.937129 1273271 start.go:495] detecting cgroup driver to use...
	I1002 21:06:45.937162 1273271 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:06:45.937218 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:06:45.955186 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:06:45.967891 1273271 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:06:45.967992 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:06:45.986053 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:06:46.003855 1273271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:06:46.128412 1273271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:06:46.252519 1273271 docker.go:234] disabling docker service ...
	I1002 21:06:46.252585 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:06:46.273862 1273271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:06:46.286919 1273271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:06:46.395406 1273271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:06:46.516063 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:06:46.529201 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:06:46.543438 1273271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:06:46.543519 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.553027 1273271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:06:46.553107 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.562277 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.571296 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.580279 1273271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:06:46.588197 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.597248 1273271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.610873 1273271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:06:46.619919 1273271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:06:46.627580 1273271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
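
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch; section placement is assumed, since the log shows only the individual edits):

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"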
	I1002 21:06:46.635174 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:06:46.754135 1273271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:06:46.876077 1273271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:06:46.876242 1273271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:06:46.880227 1273271 start.go:563] Will wait 60s for crictl version
	I1002 21:06:46.880345 1273271 ssh_runner.go:195] Run: which crictl
	I1002 21:06:46.883879 1273271 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:06:46.907117 1273271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:06:46.907307 1273271 ssh_runner.go:195] Run: crio --version
	I1002 21:06:46.935315 1273271 ssh_runner.go:195] Run: crio --version
	I1002 21:06:46.969733 1273271 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:06:46.972578 1273271 cli_runner.go:164] Run: docker network inspect addons-806706 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:06:46.988815 1273271 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:06:46.992722 1273271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:06:47.003351 1273271 kubeadm.go:883] updating cluster {Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:06:47.003508 1273271 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:06:47.003579 1273271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:06:47.038873 1273271 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:06:47.038895 1273271 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:06:47.038950 1273271 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:06:47.067946 1273271 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:06:47.067969 1273271 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:06:47.067977 1273271 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:06:47.068061 1273271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-806706 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:06:47.068147 1273271 ssh_runner.go:195] Run: crio config
	I1002 21:06:47.119332 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:06:47.119355 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:06:47.119373 1273271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:06:47.119414 1273271 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-806706 NodeName:addons-806706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:06:47.119563 1273271 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-806706"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
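
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2210-byte scp a few lines below) and handed to kubeadm during bootstrap; run by hand, the equivalent step would be roughly:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new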
	
	I1002 21:06:47.119636 1273271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:06:47.127243 1273271 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:06:47.127312 1273271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:06:47.134887 1273271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 21:06:47.149355 1273271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:06:47.162564 1273271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1002 21:06:47.175947 1273271 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:06:47.179581 1273271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:06:47.189152 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:06:47.294051 1273271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:06:47.310333 1273271 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706 for IP: 192.168.49.2
	I1002 21:06:47.310356 1273271 certs.go:195] generating shared ca certs ...
	I1002 21:06:47.310372 1273271 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.310500 1273271 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 21:06:47.717769 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt ...
	I1002 21:06:47.717802 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt: {Name:mkad6e6e4490a5c9a5702e976ad0453b70d21cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.718696 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key ...
	I1002 21:06:47.718717 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key: {Name:mka5069782b2362307f91a95829433ee76cf98fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:47.718873 1273271 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 21:06:48.499587 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt ...
	I1002 21:06:48.499622 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt: {Name:mk98f973c0fc8519a3c830311c50c56d34441e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.499817 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key ...
	I1002 21:06:48.499831 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key: {Name:mkd3b7a0d48b4fcac6b64ca401b68027872a7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.499920 1273271 certs.go:257] generating profile certs ...
	I1002 21:06:48.499979 1273271 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key
	I1002 21:06:48.499997 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt with IP's: []
	I1002 21:06:48.909085 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt ...
	I1002 21:06:48.909118 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: {Name:mk8d33efa59629ba32bda29012092ba282d54569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.909927 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key ...
	I1002 21:06:48.909946 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.key: {Name:mk11a1e76e0bf6428500af10fb13698297295501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:48.910624 1273271 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51
	I1002 21:06:48.910652 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:06:49.184689 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 ...
	I1002 21:06:49.184718 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51: {Name:mkd31106a0539e19ca9a0e5be8892b59bdfc64d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.185523 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51 ...
	I1002 21:06:49.185548 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51: {Name:mka13dee7cbea5738848e70c38f0c188824e9341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.186302 1273271 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt.f25a6f51 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt
	I1002 21:06:49.186395 1273271 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key.f25a6f51 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key
	I1002 21:06:49.186452 1273271 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key
	I1002 21:06:49.186482 1273271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt with IP's: []
	I1002 21:06:49.566788 1273271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt ...
	I1002 21:06:49.566825 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt: {Name:mkb4b3cf0fa7cc3ad96dcfb6c9caa7554ad2a76c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.567020 1273271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key ...
	I1002 21:06:49.567035 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key: {Name:mk049779da010e6f379f975bacb7429ea8771c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:06:49.567880 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:06:49.567937 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:06:49.567966 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:06:49.567992 1273271 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 21:06:49.568563 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:06:49.587519 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:06:49.605597 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:06:49.623508 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:06:49.641194 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:06:49.659711 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:06:49.677805 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:06:49.702851 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:06:49.721926 1273271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:06:49.739661 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
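	
	The crypto.go steps above generate self-signed CA certificates ("minikubeCA", "proxyClientCA") and then profile certs signed by them. A minimal sketch of creating a self-signed CA with Go's standard library; the exact template fields minikube sets are not shown in the log, so the values here (key size, lifetime, usages) are illustrative assumptions:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)
	
	func main() {
		// Key pair for the CA (2048-bit RSA, an assumption).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: template and parent are the same certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		// Emit the PEM block, the format of the ca.crt files copied above.
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}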
	I1002 21:06:49.752904 1273271 ssh_runner.go:195] Run: openssl version
	I1002 21:06:49.759200 1273271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:06:49.767563 1273271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.771451 1273271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.771546 1273271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:06:49.813667 1273271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:06:49.822204 1273271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:06:49.825919 1273271 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:06:49.825970 1273271 kubeadm.go:400] StartCluster: {Name:addons-806706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-806706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:06:49.826061 1273271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:06:49.826121 1273271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:06:49.856681 1273271 cri.go:89] found id: ""
	I1002 21:06:49.856820 1273271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:06:49.864713 1273271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:06:49.872570 1273271 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:06:49.872634 1273271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:06:49.880463 1273271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:06:49.880481 1273271 kubeadm.go:157] found existing configuration files:
	
	I1002 21:06:49.880532 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:06:49.887967 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:06:49.888057 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:06:49.895164 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:06:49.902843 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:06:49.902963 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:06:49.910192 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:06:49.917620 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:06:49.917687 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:06:49.925026 1273271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:06:49.932616 1273271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:06:49.932702 1273271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
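	
	The four grep/rm pairs above implement one pattern: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A minimal Go sketch of that check-or-remove logic (not minikube's implementation; paths and endpoint are taken from the log):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// ensureFresh removes path unless it already references endpoint,
	// mirroring the grep-then-rm sequence in the log above.
	func ensureFresh(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return nil // the "No such file or directory" case seen above
		}
		if err != nil {
			return err
		}
		if bytes.Contains(data, []byte(endpoint)) {
			return nil // config already targets the right endpoint; keep it
		}
		fmt.Printf("removing stale %s\n", path)
		return os.Remove(path)
	}
	
	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			if err := ensureFresh("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8443"); err != nil {
				panic(err)
			}
		}
	}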
	I1002 21:06:49.940014 1273271 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:06:49.979453 1273271 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:06:49.979746 1273271 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:06:50.004532 1273271 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:06:50.004616 1273271 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 21:06:50.004660 1273271 kubeadm.go:318] OS: Linux
	I1002 21:06:50.004711 1273271 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:06:50.004766 1273271 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 21:06:50.004821 1273271 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:06:50.004877 1273271 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:06:50.004956 1273271 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:06:50.005011 1273271 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:06:50.005065 1273271 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:06:50.005120 1273271 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:06:50.005174 1273271 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 21:06:50.079292 1273271 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:06:50.079414 1273271 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:06:50.079520 1273271 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:06:50.090564 1273271 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:06:50.096831 1273271 out.go:252]   - Generating certificates and keys ...
	I1002 21:06:50.097032 1273271 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:06:50.097163 1273271 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:06:50.319703 1273271 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:06:50.518433 1273271 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:06:51.225839 1273271 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:06:52.015224 1273271 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:06:52.191661 1273271 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:06:52.191853 1273271 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-806706 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:06:52.478313 1273271 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:06:52.478485 1273271 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-806706 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:06:53.092568 1273271 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:06:53.748457 1273271 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:06:54.512186 1273271 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:06:54.512489 1273271 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:06:54.707117 1273271 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:06:55.257407 1273271 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:06:55.406325 1273271 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:06:55.758695 1273271 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:06:56.231496 1273271 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:06:56.232196 1273271 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:06:56.237323 1273271 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:06:56.241082 1273271 out.go:252]   - Booting up control plane ...
	I1002 21:06:56.241201 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:06:56.241283 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:06:56.241353 1273271 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:06:56.255803 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:06:56.255918 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:06:56.263894 1273271 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:06:56.264287 1273271 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:06:56.264520 1273271 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:06:56.397852 1273271 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:06:56.397985 1273271 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:06:57.899682 1273271 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501912544s
	I1002 21:06:57.905715 1273271 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:06:57.905819 1273271 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:06:57.905918 1273271 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:06:57.906004 1273271 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:07:00.877910 1273271 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.972603979s
	I1002 21:07:03.855188 1273271 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.950330222s
	I1002 21:07:03.909139 1273271 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001999277s
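	
	The control-plane-check phase above polls three HTTPS health endpoints (kube-apiserver /livez, kube-controller-manager /healthz, kube-scheduler /livez) until each returns 200 or a 4m0s deadline passes. A minimal sketch of that kind of probe loop in Go; the interval and TLS handling here are assumptions, and the URL is the one reported in the log:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it answers 200 OK or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The component's serving cert is not in the host trust store,
			// so skip verification for the health probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("https://192.168.49.2:8443/livez", 4*time.Minute); err != nil {
			panic(err)
		}
	}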
	I1002 21:07:03.932131 1273271 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:07:04.450147 1273271 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:07:04.469424 1273271 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:07:04.469656 1273271 kubeadm.go:318] [mark-control-plane] Marking the node addons-806706 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:07:04.481845 1273271 kubeadm.go:318] [bootstrap-token] Using token: j9kk4w.ed5b2m1m2jv5yn6m
	I1002 21:07:04.486881 1273271 out.go:252]   - Configuring RBAC rules ...
	I1002 21:07:04.487029 1273271 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:07:04.490598 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:07:04.498400 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:07:04.505060 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:07:04.509252 1273271 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:07:04.513648 1273271 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:07:04.645363 1273271 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:07:05.081602 1273271 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 21:07:05.646339 1273271 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 21:07:05.647513 1273271 kubeadm.go:318] 
	I1002 21:07:05.647588 1273271 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 21:07:05.647602 1273271 kubeadm.go:318] 
	I1002 21:07:05.647685 1273271 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 21:07:05.647695 1273271 kubeadm.go:318] 
	I1002 21:07:05.647723 1273271 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 21:07:05.647790 1273271 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:07:05.647847 1273271 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:07:05.647857 1273271 kubeadm.go:318] 
	I1002 21:07:05.647922 1273271 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 21:07:05.647932 1273271 kubeadm.go:318] 
	I1002 21:07:05.647983 1273271 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:07:05.647991 1273271 kubeadm.go:318] 
	I1002 21:07:05.648052 1273271 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 21:07:05.648138 1273271 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:07:05.648215 1273271 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:07:05.648224 1273271 kubeadm.go:318] 
	I1002 21:07:05.648314 1273271 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:07:05.648399 1273271 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 21:07:05.648407 1273271 kubeadm.go:318] 
	I1002 21:07:05.648496 1273271 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token j9kk4w.ed5b2m1m2jv5yn6m \
	I1002 21:07:05.648608 1273271 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 21:07:05.648634 1273271 kubeadm.go:318] 	--control-plane 
	I1002 21:07:05.648643 1273271 kubeadm.go:318] 
	I1002 21:07:05.648732 1273271 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:07:05.648740 1273271 kubeadm.go:318] 
	I1002 21:07:05.648832 1273271 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token j9kk4w.ed5b2m1m2jv5yn6m \
	I1002 21:07:05.648948 1273271 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 21:07:05.651796 1273271 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 21:07:05.652036 1273271 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 21:07:05.652151 1273271 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:07:05.652176 1273271 cni.go:84] Creating CNI manager for ""
	I1002 21:07:05.652189 1273271 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:07:05.655369 1273271 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:07:05.658320 1273271 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:07:05.662455 1273271 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:07:05.662477 1273271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:07:05.675153 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:07:05.965382 1273271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:07:05.965537 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:05.965618 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-806706 minikube.k8s.io/updated_at=2025_10_02T21_07_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=addons-806706 minikube.k8s.io/primary=true
	I1002 21:07:06.125578 1273271 ops.go:34] apiserver oom_adj: -16
	I1002 21:07:06.125638 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:06.625925 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:07.126273 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:07.625763 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:08.126187 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:08.626001 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.126212 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.626382 1273271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:07:09.724208 1273271 kubeadm.go:1113] duration metric: took 3.758721503s to wait for elevateKubeSystemPrivileges
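	
	The repeated "kubectl get sa default" lines above are a fixed-interval poll: the command is retried roughly every 500ms until the default service account exists, which is what the 3.758721503s elevateKubeSystemPrivileges duration measures. A minimal local sketch of that cadence (minikube actually runs kubectl over SSH inside the node; this stand-alone version only mirrors the retry loop, and the timeout is an assumption):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		// Poll every 500ms until the "default" service account exists.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default service account")
	}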
	I1002 21:07:09.724240 1273271 kubeadm.go:402] duration metric: took 19.898272937s to StartCluster
	I1002 21:07:09.724257 1273271 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:07:09.725062 1273271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:07:09.725721 1273271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:07:09.728846 1273271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:07:09.729348 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:07:09.729753 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:09.729718 1273271 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 21:07:09.729947 1273271 addons.go:69] Setting yakd=true in profile "addons-806706"
	I1002 21:07:09.729983 1273271 addons.go:238] Setting addon yakd=true in "addons-806706"
	I1002 21:07:09.729986 1273271 addons.go:69] Setting inspektor-gadget=true in profile "addons-806706"
	I1002 21:07:09.730061 1273271 addons.go:69] Setting registry=true in profile "addons-806706"
	I1002 21:07:09.730078 1273271 addons.go:238] Setting addon registry=true in "addons-806706"
	I1002 21:07:09.730091 1273271 addons.go:238] Setting addon inspektor-gadget=true in "addons-806706"
	I1002 21:07:09.730110 1273271 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-806706"
	I1002 21:07:09.730133 1273271 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-806706"
	I1002 21:07:09.730176 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730415 1273271 addons.go:69] Setting volcano=true in profile "addons-806706"
	I1002 21:07:09.730446 1273271 addons.go:238] Setting addon volcano=true in "addons-806706"
	I1002 21:07:09.730470 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730865 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730911 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.735684 1273271 addons.go:69] Setting volumesnapshots=true in profile "addons-806706"
	I1002 21:07:09.735774 1273271 addons.go:238] Setting addon volumesnapshots=true in "addons-806706"
	I1002 21:07:09.735823 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.736377 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.742563 1273271 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-806706"
	I1002 21:07:09.742604 1273271 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-806706"
	I1002 21:07:09.742641 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.743142 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.753365 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.757614 1273271 addons.go:69] Setting cloud-spanner=true in profile "addons-806706"
	I1002 21:07:09.757697 1273271 addons.go:238] Setting addon cloud-spanner=true in "addons-806706"
	I1002 21:07:09.757764 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.758369 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.766491 1273271 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-806706"
	I1002 21:07:09.766563 1273271 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-806706"
	I1002 21:07:09.766595 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.767084 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.768348 1273271 out.go:179] * Verifying Kubernetes components...
	I1002 21:07:09.782435 1273271 addons.go:69] Setting default-storageclass=true in profile "addons-806706"
	I1002 21:07:09.782468 1273271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-806706"
	I1002 21:07:09.782827 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.790332 1273271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:07:09.801005 1273271 addons.go:69] Setting gcp-auth=true in profile "addons-806706"
	I1002 21:07:09.801051 1273271 mustload.go:65] Loading cluster: addons-806706
	I1002 21:07:09.801260 1273271 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:09.801532 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.830728 1273271 addons.go:69] Setting ingress=true in profile "addons-806706"
	I1002 21:07:09.830828 1273271 addons.go:238] Setting addon ingress=true in "addons-806706"
	I1002 21:07:09.830909 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.831543 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	W1002 21:07:09.831924 1273271 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1002 21:07:09.837925 1273271 addons.go:69] Setting ingress-dns=true in profile "addons-806706"
	I1002 21:07:09.838005 1273271 addons.go:238] Setting addon ingress-dns=true in "addons-806706"
	I1002 21:07:09.838098 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.838623 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730093 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.853915 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730020 1273271 addons.go:69] Setting metrics-server=true in profile "addons-806706"
	I1002 21:07:09.878918 1273271 addons.go:238] Setting addon metrics-server=true in "addons-806706"
	I1002 21:07:09.878990 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730054 1273271 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-806706"
	I1002 21:07:09.879332 1273271 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-806706"
	I1002 21:07:09.879370 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730099 1273271 addons.go:69] Setting registry-creds=true in profile "addons-806706"
	I1002 21:07:09.886090 1273271 addons.go:238] Setting addon registry-creds=true in "addons-806706"
	I1002 21:07:09.886159 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.730104 1273271 addons.go:69] Setting storage-provisioner=true in profile "addons-806706"
	I1002 21:07:09.895669 1273271 addons.go:238] Setting addon storage-provisioner=true in "addons-806706"
	I1002 21:07:09.890809 1273271 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 21:07:09.891170 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.891362 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.891440 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.730013 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.895814 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.896629 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.929876 1273271 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 21:07:09.929953 1273271 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 21:07:09.930103 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:09.952850 1273271 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 21:07:09.959928 1273271 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 21:07:09.959953 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 21:07:09.960029 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:09.977362 1273271 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-806706"
	I1002 21:07:09.977407 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:09.977821 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:09.994762 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 21:07:10.000326 1273271 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 21:07:10.006167 1273271 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 21:07:10.006199 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 21:07:10.006288 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.000330 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 21:07:10.038953 1273271 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 21:07:10.039033 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.049522 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:10.052862 1273271 addons.go:238] Setting addon default-storageclass=true in "addons-806706"
	I1002 21:07:10.052905 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:10.053334 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:10.058910 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:10.066974 1273271 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 21:07:10.067158 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 21:07:10.073589 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 21:07:10.096435 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 21:07:10.100303 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 21:07:10.108639 1273271 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 21:07:10.109051 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 21:07:10.109069 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 21:07:10.109146 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.128325 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:10.132315 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 21:07:10.137801 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 21:07:10.138134 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:10.148414 1273271 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:07:10.148446 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 21:07:10.148518 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.148964 1273271 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:07:10.149022 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 21:07:10.149113 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.172501 1273271 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 21:07:10.177719 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:07:10.178220 1273271 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 21:07:10.178239 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 21:07:10.178302 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.186261 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 21:07:10.186756 1273271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:07:10.186808 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:07:10.186919 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.202359 1273271 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 21:07:10.205551 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 21:07:10.205868 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 21:07:10.205884 1273271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 21:07:10.205965 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.212430 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 21:07:10.220721 1273271 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 21:07:10.224508 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 21:07:10.224544 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 21:07:10.224634 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.241296 1273271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:07:10.265326 1273271 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 21:07:10.277651 1273271 out.go:179]   - Using image docker.io/busybox:stable
	I1002 21:07:10.282324 1273271 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:07:10.282348 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 21:07:10.282507 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.284492 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.291660 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.293367 1273271 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 21:07:10.296340 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 21:07:10.296362 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 21:07:10.296428 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.326875 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.330663 1273271 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 21:07:10.333491 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 21:07:10.333519 1273271 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 21:07:10.333609 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.337041 1273271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:07:10.346084 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.380678 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.407492 1273271 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:07:10.407513 1273271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:07:10.407577 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:10.410225 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.420135 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.431968 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.439046 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.476759 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.506669 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	W1002 21:07:10.509464 1273271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 21:07:10.509513 1273271 retry.go:31] will retry after 190.507692ms: ssh: handshake failed: EOF
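	
	The retry.go line above shows minikube's standard recovery for a transient SSH handshake failure: log the error, sleep a jittered interval, and try again. A generic retry-with-backoff sketch under that assumption (the backoff shape and attempt count here are illustrative, not minikube's actual parameters):
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry runs fn until it succeeds or attempts are exhausted, sleeping a
	// jittered, growing interval between tries.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jittered exponential backoff: base*2^i scaled by a random factor.
			sleep := time.Duration(float64(base) * float64(1<<i) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
		}
		return err
	}
	
	func main() {
		calls := 0
		err := retry(5, 100*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF")
			}
			return nil
		})
		if err != nil {
			panic(err)
		}
	}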
	I1002 21:07:10.520059 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.520832 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.528307 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.532662 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:10.999392 1273271 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:10.999468 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 21:07:11.008772 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 21:07:11.008846 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 21:07:11.129987 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 21:07:11.210546 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:07:11.222067 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 21:07:11.222093 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 21:07:11.243106 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:11.246584 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 21:07:11.304723 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 21:07:11.307179 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 21:07:11.322550 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 21:07:11.322583 1273271 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 21:07:11.336867 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 21:07:11.336892 1273271 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 21:07:11.367757 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:07:11.398189 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:07:11.442150 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:07:11.444913 1273271 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 21:07:11.444937 1273271 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 21:07:11.464326 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 21:07:11.464350 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 21:07:11.475588 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 21:07:11.475615 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 21:07:11.479859 1273271 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:07:11.479890 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 21:07:11.484304 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:07:11.528238 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 21:07:11.528265 1273271 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 21:07:11.622120 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 21:07:11.622144 1273271 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 21:07:11.651926 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 21:07:11.651965 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 21:07:11.671553 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:07:11.680047 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 21:07:11.680072 1273271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 21:07:11.776615 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 21:07:11.776641 1273271 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 21:07:11.788447 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 21:07:11.788473 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 21:07:11.877313 1273271 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:07:11.877357 1273271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 21:07:11.901739 1273271 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:11.901778 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 21:07:11.965107 1273271 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 21:07:11.965130 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 21:07:11.967119 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 21:07:11.967158 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 21:07:12.048810 1273271 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.711716204s)
	I1002 21:07:12.049716 1273271 node_ready.go:35] waiting up to 6m0s for node "addons-806706" to be "Ready" ...
	I1002 21:07:12.050249 1273271 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.808908513s)
	I1002 21:07:12.050272 1273271 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
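(For reference, the sed pipeline completed above amounts to two insertions in the CoreDNS Corefile: a hosts stanza ahead of the forward plugin, and log ahead of errors. A sketch of the resulting fragment, trimmed to the edited stanzas with the other plugins omitted:)

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}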
	I1002 21:07:12.062741 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:07:12.095087 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:12.154680 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 21:07:12.207863 1273271 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 21:07:12.207938 1273271 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 21:07:12.300769 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.170746365s)
	I1002 21:07:12.494517 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 21:07:12.494593 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 21:07:12.561962 1273271 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-806706" context rescaled to 1 replicas
	I1002 21:07:12.796435 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 21:07:12.796511 1273271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 21:07:12.921352 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 21:07:12.921425 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 21:07:13.033734 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 21:07:13.033799 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 21:07:13.218598 1273271 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 21:07:13.218670 1273271 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 21:07:13.478718 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 21:07:14.082903 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:15.922075 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.711493081s)
	I1002 21:07:15.922150 1273271 addons.go:479] Verifying addon ingress=true in "addons-806706"
	I1002 21:07:15.922177 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.679041235s)
	W1002 21:07:15.922218 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:15.922239 1273271 retry.go:31] will retry after 263.658332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
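(The validation failure above is kubectl's client-side schema check: every document in an applied manifest must set apiVersion and kind, and ig-crd.yaml is evidently shipping without both, so every retry of the same file fails identically. A minimal sketch of the header such a file needs; the CRD name below is hypothetical and only the two top-level fields matter here:)

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io   # hypothetical name, for illustration only
	spec:
	  ...                              # group/versions/schema as usual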
	I1002 21:07:15.922282 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.675676205s)
	I1002 21:07:15.922325 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.617580567s)
	I1002 21:07:15.922351 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.615151601s)
	I1002 21:07:15.922396 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.554616841s)
	I1002 21:07:15.922612 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.524390329s)
	I1002 21:07:15.922661 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.480483305s)
	I1002 21:07:15.922807 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.438472211s)
	I1002 21:07:15.922852 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.251274314s)
	I1002 21:07:15.922867 1273271 addons.go:479] Verifying addon registry=true in "addons-806706"
	I1002 21:07:15.923288 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.860513825s)
	I1002 21:07:15.923314 1273271 addons.go:479] Verifying addon metrics-server=true in "addons-806706"
	I1002 21:07:15.923401 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.82822872s)
	W1002 21:07:15.923422 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 21:07:15.923434 1273271 retry.go:31] will retry after 127.069562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
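(The "ensure CRDs are installed first" error is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, before the API server has registered the new type, which is why the forced re-apply a few lines below succeeds. A hedged sketch of one way to serialize the two steps, using the CRD named in the error; plain stock kubectl, nothing minikube-specific:)

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml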
	I1002 21:07:15.923477 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.76872653s)
	I1002 21:07:15.925483 1273271 out.go:179] * Verifying registry addon...
	I1002 21:07:15.925523 1273271 out.go:179] * Verifying ingress addon...
	I1002 21:07:15.929452 1273271 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-806706 service yakd-dashboard -n yakd-dashboard
	
	I1002 21:07:15.930208 1273271 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 21:07:15.931007 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 21:07:15.937619 1273271 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 21:07:15.937644 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:15.939294 1273271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:07:15.939317 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 21:07:15.940231 1273271 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
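(The default-storageclass failure above is an optimistic-concurrency conflict: the addon read the local-path StorageClass, modified it, and sent back an update whose resourceVersion had already gone stale. A merge patch carries no resourceVersion, so as a sketch it sidesteps the conflict entirely; object and annotation names are the ones from the error message:)

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'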
	I1002 21:07:16.050735 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:07:16.186575 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:16.448179 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:16.450635 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.971808234s)
	I1002 21:07:16.450667 1273271 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-806706"
	I1002 21:07:16.453825 1273271 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 21:07:16.457164 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:16.457771 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 21:07:16.469441 1273271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:07:16.469466 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:16.554220 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:16.936067 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:16.938482 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:16.962498 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.434964 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:17.435219 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:17.461022 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.810241 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 21:07:17.810346 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:17.837053 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:17.936838 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:17.937406 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:17.946685 1273271 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 21:07:17.962139 1273271 addons.go:238] Setting addon gcp-auth=true in "addons-806706"
	I1002 21:07:17.962183 1273271 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:07:17.962619 1273271 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:07:17.963390 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:17.980402 1273271 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 21:07:17.980464 1273271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:07:17.998361 1273271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:07:18.435543 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:18.435789 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:18.460790 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:18.935028 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:18.935437 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:18.944842 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.89404948s)
	I1002 21:07:18.944919 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.75826717s)
	W1002 21:07:18.944951 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:18.944973 1273271 retry.go:31] will retry after 370.220878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:18.947973 1273271 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 21:07:18.950838 1273271 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 21:07:18.953646 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 21:07:18.953677 1273271 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 21:07:18.961729 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:18.972133 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 21:07:18.972214 1273271 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 21:07:18.986019 1273271 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 21:07:18.986106 1273271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 21:07:19.000019 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1002 21:07:19.053851 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:19.316408 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:19.439014 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:19.440050 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:19.467273 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:19.575428 1273271 addons.go:479] Verifying addon gcp-auth=true in "addons-806706"
	I1002 21:07:19.578706 1273271 out.go:179] * Verifying gcp-auth addon...
	I1002 21:07:19.582315 1273271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 21:07:19.591969 1273271 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 21:07:19.591994 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:19.936120 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:19.936284 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:19.962204 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:20.085479 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:20.248020 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:20.248054 1273271 retry.go:31] will retry after 650.71335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:20.434680 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:20.434815 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:20.461477 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:20.585509 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:20.899039 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:20.935834 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:20.936838 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:20.961719 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:21.086141 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:21.435251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:21.435519 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:21.461651 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:21.553831 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:21.585415 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:21.740320 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:21.740352 1273271 retry.go:31] will retry after 469.524684ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:21.933604 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:21.934867 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:21.962367 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:22.085631 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:22.210869 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:22.434760 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:22.435172 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:22.461548 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:22.586197 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:22.934857 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:22.935044 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:22.961640 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:23.034703 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:23.034733 1273271 retry.go:31] will retry after 1.23076577s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:23.085810 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:23.434350 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:23.434498 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:23.461596 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:23.586100 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:23.934659 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:23.934862 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:23.960609 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:24.052529 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:24.085447 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:24.265692 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:24.435088 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:24.435698 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:24.460746 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:24.586314 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:24.934952 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:24.935782 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:24.961387 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:25.078552 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:25.078590 1273271 retry.go:31] will retry after 1.733039225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:25.085535 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:25.434540 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:25.434894 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:25.460603 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:25.585875 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:25.934311 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:25.934449 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:25.961470 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:26.053948 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:26.085707 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:26.434545 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:26.434825 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:26.461502 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:26.585081 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:26.812439 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:26.935927 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:26.936745 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:26.960762 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:27.086254 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:27.435069 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:27.435261 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:27.461472 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:27.585834 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:27.633678 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:27.633710 1273271 retry.go:31] will retry after 1.586831322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:27.934102 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:27.934441 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:27.961836 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:28.085882 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:28.434885 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:28.435371 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:28.461094 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:28.553100 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:28.586364 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:28.934535 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:28.934595 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:28.961385 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:29.085986 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:29.221142 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:29.434838 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:29.435783 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:29.461043 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:29.585659 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:29.935666 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:29.936087 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:29.961447 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:30.066747 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:30.067406 1273271 retry.go:31] will retry after 2.435069948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:30.085499 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:30.434575 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:30.435067 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:30.460755 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:30.585881 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:30.934108 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:30.934448 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:30.961530 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:31.053499 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:31.086308 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:31.433809 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:31.433990 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:31.460885 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:31.585621 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:31.933971 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:31.934624 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:31.962524 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:32.085264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:32.434192 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:32.435542 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:32.461864 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:32.503251 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:32.585938 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:32.934886 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:32.935193 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:32.961896 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:33.086024 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:33.305039 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:33.305070 1273271 retry.go:31] will retry after 5.195500776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:33.434265 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:33.434611 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:33.461395 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:33.553353 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:33.585515 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:33.934849 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:33.934993 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:33.961894 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:34.086099 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:34.433500 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:34.434559 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:34.461550 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:34.585363 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:34.933926 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:34.934098 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:34.961422 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:35.085574 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:35.434827 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:35.434847 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:35.460814 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:35.553499 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:35.585252 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:35.933091 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:35.933908 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:35.960988 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:36.085731 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:36.434964 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:36.435162 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:36.461183 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:36.585752 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:36.933953 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:36.934338 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:36.961370 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:37.085253 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:37.433813 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:37.433955 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:37.460843 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:37.585411 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:37.934203 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:37.934378 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:37.961869 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:38.053164 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:38.086557 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:38.434398 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:38.434464 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:38.461004 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:38.501394 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:38.585801 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:38.958120 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:38.966795 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:38.967504 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:39.086556 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:39.407833 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:39.407913 1273271 retry.go:31] will retry after 5.402739697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:39.433546 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:39.434161 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:39.460919 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:39.585271 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:39.933637 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:39.933823 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:39.962100 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:40.086378 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:40.433508 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:40.434615 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:40.461833 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:40.552832 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:40.585570 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:40.934059 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:40.934452 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:40.962312 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:41.085766 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:41.433885 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:41.434331 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:41.461318 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:41.585393 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:41.933977 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:41.934139 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:41.961873 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:42.086566 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:42.433884 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:42.434129 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:42.471855 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:42.553265 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:42.586117 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:42.934285 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:42.934602 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:42.962489 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:43.086532 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:43.434217 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:43.434482 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:43.461617 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:43.585940 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:43.933610 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:43.933859 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:43.960669 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:44.086159 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:44.433938 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:44.434001 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:44.460763 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:44.585606 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:44.811674 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:07:44.935273 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:44.935853 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:44.962130 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:45.061322 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:45.086008 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:45.435675 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:45.436406 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:45.463877 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:45.585711 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 21:07:45.802355 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:07:45.802393 1273271 retry.go:31] will retry after 18.199074495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
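The retry intervals logged across these attempts (2.4s, 5.2s, 5.4s, 18.2s) grow roughly geometrically, consistent with a jittered exponential backoff in retry.go. A minimal bash sketch of the same pattern, where apply_addons is a hypothetical stand-in for the kubectl invocation shown in the log:

	# Hypothetical retry loop with exponential backoff and jitter.
	delay=2
	for attempt in 1 2 3 4 5; do
	    apply_addons && break                # success: stop retrying
	    sleep "$((delay + RANDOM % delay))"  # jittered wait
	    delay=$((delay * 2))                 # double the base delay
	done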
	I1002 21:07:45.933920 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:45.934097 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:45.962333 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:46.086418 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:46.434152 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:46.434214 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:46.461141 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:46.586091 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:46.932980 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:46.933545 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:46.961219 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:47.086002 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:47.433457 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:47.434754 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:47.461758 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:47.552580 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:47.585353 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:47.933972 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:47.933985 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:47.960745 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:48.085673 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:48.433787 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:48.435091 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:48.460641 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:48.585264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:48.933467 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:48.933910 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:48.961381 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:49.086122 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:49.433208 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:49.433467 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:49.461801 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 21:07:49.552910 1273271 node_ready.go:57] node "addons-806706" has "Ready":"False" status (will retry)
	I1002 21:07:49.585606 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:49.933930 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:49.934230 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:49.961782 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:50.086017 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:50.434475 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:50.434529 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:50.461706 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:50.585113 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:50.934806 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:50.935321 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:50.961760 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:51.086113 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:51.452027 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:51.453686 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:51.558120 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:51.594538 1273271 node_ready.go:49] node "addons-806706" is "Ready"
	I1002 21:07:51.594572 1273271 node_ready.go:38] duration metric: took 39.544819954s for node "addons-806706" to be "Ready" ...
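The node reaching Ready after ~39.5s is what unblocks the pod waits that had been retrying above. An equivalent manual check with kubectl (a sketch; the node name is taken from the log):

	kubectl wait --for=condition=Ready node/addons-806706 --timeout=120s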
	I1002 21:07:51.594586 1273271 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:07:51.594654 1273271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:07:51.598271 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:51.617604 1273271 api_server.go:72] duration metric: took 41.888712651s to wait for apiserver process to appear ...
	I1002 21:07:51.617629 1273271 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:07:51.617649 1273271 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 21:07:51.639019 1273271 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
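The healthz probe above hits the apiserver directly at the node's internal endpoint. A sketch of the same check from the host shell, assuming the default minikube certificate layout and that the Linux docker driver makes 192.168.49.2 reachable from the host (recent Kubernetes may reject the unauthenticated endpoint, hence the client certs):

	curl --cacert ~/.minikube/ca.crt \
	     --cert ~/.minikube/profiles/addons-806706/client.crt \
	     --key ~/.minikube/profiles/addons-806706/client.key \
	     https://192.168.49.2:8443/healthz   # prints "ok" on success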
	I1002 21:07:51.646107 1273271 api_server.go:141] control plane version: v1.34.1
	I1002 21:07:51.646139 1273271 api_server.go:131] duration metric: took 28.503568ms to wait for apiserver health ...
	I1002 21:07:51.646148 1273271 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:07:51.664630 1273271 system_pods.go:59] 19 kube-system pods found
	I1002 21:07:51.664715 1273271 system_pods.go:61] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending
	I1002 21:07:51.664737 1273271 system_pods.go:61] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.664758 1273271 system_pods.go:61] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending
	I1002 21:07:51.664793 1273271 system_pods.go:61] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.664818 1273271 system_pods.go:61] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.664840 1273271 system_pods.go:61] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.664876 1273271 system_pods.go:61] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.664899 1273271 system_pods.go:61] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.664922 1273271 system_pods.go:61] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.664957 1273271 system_pods.go:61] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.664982 1273271 system_pods.go:61] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.665001 1273271 system_pods.go:61] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.665038 1273271 system_pods.go:61] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.665061 1273271 system_pods.go:61] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending
	I1002 21:07:51.665081 1273271 system_pods.go:61] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.665112 1273271 system_pods.go:61] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.665133 1273271 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.665151 1273271 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.665175 1273271 system_pods.go:61] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.665209 1273271 system_pods.go:74] duration metric: took 19.05453ms to wait for pod list to return data ...
	I1002 21:07:51.665231 1273271 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:07:51.673827 1273271 default_sa.go:45] found service account: "default"
	I1002 21:07:51.673918 1273271 default_sa.go:55] duration metric: took 8.665447ms for default service account to be created ...
	I1002 21:07:51.673944 1273271 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:07:51.679620 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:51.679734 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending
	I1002 21:07:51.679757 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.679776 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending
	I1002 21:07:51.679811 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.679836 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.679857 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.679895 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.679918 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.679940 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.679972 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.680000 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.680020 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.680053 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.680074 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending
	I1002 21:07:51.680092 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.680112 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.680144 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.680162 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.680184 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.680230 1273271 retry.go:31] will retry after 197.900028ms: missing components: kube-dns
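The "missing components: kube-dns" retries above continue until the CoreDNS pod (coredns-66bc5c9577-pr27b) leaves Pending and reports Running. A direct way to watch the same condition (a sketch; CoreDNS carries the legacy k8s-app=kube-dns label):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl -n kube-system get pods -l k8s-app=kube-dns --watch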
	I1002 21:07:51.931125 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:51.931206 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:51.931230 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending
	I1002 21:07:51.931270 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:51.931292 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending
	I1002 21:07:51.931313 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:51.931331 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:51.931362 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:51.931386 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:51.931407 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:51.931442 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:51.931468 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:51.931486 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending
	I1002 21:07:51.931521 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending
	I1002 21:07:51.931545 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:51.931562 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending
	I1002 21:07:51.931597 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:51.931618 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending
	I1002 21:07:51.931636 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending
	I1002 21:07:51.931656 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:51.931906 1273271 retry.go:31] will retry after 257.404003ms: missing components: kube-dns
	I1002 21:07:51.944503 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:51.945234 1273271 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:07:51.945251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:51.966275 1273271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:07:51.966299 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.091534 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:52.194857 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.194895 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.194905 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.194913 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.194920 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.194924 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.194930 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.194936 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.194944 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.194950 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.194959 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.194964 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.194972 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.194983 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.194994 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.195004 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.195008 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending
	I1002 21:07:52.195015 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.195022 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.195027 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:07:52.195045 1273271 retry.go:31] will retry after 398.554495ms: missing components: kube-dns
	I1002 21:07:52.436222 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:52.437591 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:52.553750 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.589857 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:52.602627 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.602672 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.602682 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.602690 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.602696 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.602701 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.602707 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.602715 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.602725 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.602746 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.602762 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.602767 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.602777 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.602788 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.602798 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.602804 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.602822 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:52.602832 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.602843 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.602848 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:52.602868 1273271 retry.go:31] will retry after 381.418125ms: missing components: kube-dns
	I1002 21:07:52.936588 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:52.937048 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:52.961695 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:52.991005 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:52.991048 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:07:52.991066 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:52.991074 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:52.991086 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:52.991097 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:52.991103 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:52.991107 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:52.991124 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:52.991139 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:52.991143 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:52.991153 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:52.991159 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:52.991166 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:52.991180 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:52.991201 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:52.991213 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:52.991219 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.991225 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:52.991234 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:52.991250 1273271 retry.go:31] will retry after 659.745459ms: missing components: kube-dns
	I1002 21:07:53.087137 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:53.436921 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:53.437809 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:53.461845 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:53.586454 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:53.677078 1273271 system_pods.go:86] 19 kube-system pods found
	I1002 21:07:53.677116 1273271 system_pods.go:89] "coredns-66bc5c9577-pr27b" [d3723195-4c36-4436-a558-1a2acd65d071] Running
	I1002 21:07:53.677127 1273271 system_pods.go:89] "csi-hostpath-attacher-0" [35bbf4c2-0914-417d-9072-66ae5067410f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:07:53.677135 1273271 system_pods.go:89] "csi-hostpath-resizer-0" [e9b95ab9-9310-4447-be54-90f194063feb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:07:53.677145 1273271 system_pods.go:89] "csi-hostpathplugin-r7mrn" [06649c88-11e0-4b02-9b43-dda7df7eb632] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:07:53.677149 1273271 system_pods.go:89] "etcd-addons-806706" [958ffd04-8055-4d8b-8155-519f68f181fc] Running
	I1002 21:07:53.677154 1273271 system_pods.go:89] "kindnet-ssl2c" [8496f12f-ae28-480d-b229-54db6d75f792] Running
	I1002 21:07:53.677159 1273271 system_pods.go:89] "kube-apiserver-addons-806706" [c3c89d97-8f21-43b4-afbc-7617b4da6593] Running
	I1002 21:07:53.677168 1273271 system_pods.go:89] "kube-controller-manager-addons-806706" [899cf097-e45c-4185-8f66-b677afe4c469] Running
	I1002 21:07:53.677176 1273271 system_pods.go:89] "kube-ingress-dns-minikube" [cfb4cb08-2dcc-40e9-899d-42bb6c70c5a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:07:53.677185 1273271 system_pods.go:89] "kube-proxy-8gptp" [98059a85-3226-4598-a3cc-65e6c5dc1033] Running
	I1002 21:07:53.677189 1273271 system_pods.go:89] "kube-scheduler-addons-806706" [b082cda9-2577-4358-9816-a4196468e1a2] Running
	I1002 21:07:53.677195 1273271 system_pods.go:89] "metrics-server-85b7d694d7-wbgcl" [cf15a15f-7fde-4b00-b31b-cd99e5ffa693] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 21:07:53.677202 1273271 system_pods.go:89] "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 21:07:53.677210 1273271 system_pods.go:89] "registry-66898fdd98-wlkhd" [56823b8d-65e3-4f80-97e3-c3805c3fb28f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:07:53.677217 1273271 system_pods.go:89] "registry-creds-764b6fb674-v22d7" [a75a4bb4-6459-4c36-9ae4-df35421d3a30] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 21:07:53.677229 1273271 system_pods.go:89] "registry-proxy-z5g9b" [e8330242-08a2-49f7-acae-329757c8f978] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:07:53.677236 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ms2zp" [a93e062a-e2e6-4c2f-a135-1b8f45a78a92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:53.677242 1273271 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rbvm4" [478068b9-ce28-4a8d-a860-1c9d19f93627] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:07:53.677250 1273271 system_pods.go:89] "storage-provisioner" [c076c6c6-96ed-45ea-b807-0126f53fd454] Running
	I1002 21:07:53.677260 1273271 system_pods.go:126] duration metric: took 2.003296946s to wait for k8s-apps to be running ...
	I1002 21:07:53.677272 1273271 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:07:53.677374 1273271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:07:53.695228 1273271 system_svc.go:56] duration metric: took 17.945251ms WaitForService to wait for kubelet
	I1002 21:07:53.695270 1273271 kubeadm.go:586] duration metric: took 43.96638259s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:07:53.695291 1273271 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:07:53.698813 1273271 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:07:53.698851 1273271 node_conditions.go:123] node cpu capacity is 2
	I1002 21:07:53.698864 1273271 node_conditions.go:105] duration metric: took 3.567692ms to run NodePressure ...
	I1002 21:07:53.698879 1273271 start.go:241] waiting for startup goroutines ...
	I1002 21:07:53.935622 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:53.936408 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:53.962423 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:54.089341 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:54.435173 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:54.435424 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:54.461449 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:54.585780 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:54.935836 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:54.936150 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:54.962024 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:55.086486 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:55.435967 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:55.436843 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:55.461725 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:55.586236 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:55.935807 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:55.936985 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:55.961388 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:56.085964 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:56.436172 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:56.437135 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:56.462508 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:56.586251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:56.933921 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:56.935085 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:56.961650 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:57.085751 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:57.434904 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:57.435085 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:57.461020 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:57.585836 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:57.937264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:57.937421 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:57.962401 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:58.085634 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:58.433851 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:58.434285 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:58.461299 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:58.586150 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:58.935784 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:58.936227 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:58.962099 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:59.086510 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:59.435427 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:59.435732 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:59.461315 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:07:59.586606 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:07:59.935657 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:07:59.935991 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:07:59.961347 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:00.094823 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:00.436630 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:00.440246 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:00.537712 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:00.589105 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:00.936275 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:00.939310 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:00.961830 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:01.085956 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:01.434475 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:01.435954 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:01.461429 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:01.585480 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:01.934848 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:01.936751 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:01.961355 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:02.085596 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:02.434562 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:02.434645 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:02.462165 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:02.586130 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:02.936281 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:02.936890 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:02.961128 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:03.089851 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:03.433559 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:03.434333 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:03.461597 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:03.586155 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:03.935759 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:03.936151 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:03.962404 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:04.002486 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:08:04.085422 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:04.435503 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:04.435937 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:04.461215 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:04.585838 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:04.935214 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:04.935251 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:04.961296 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:05.087612 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:05.232323 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.229794664s)
	W1002 21:08:05.232360 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:08:05.232392 1273271 retry.go:31] will retry after 21.927356605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
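
The apply fails here (and again, verbatim, on the retry at 21:08:27-28 below) because kubectl's validation requires every YAML document it applies to declare both apiVersion and kind; the message "[apiVersion not set, kind not set]" means at least one document inside /etc/kubernetes/addons/ig-crd.yaml carries neither, for example an empty document left behind a stray '---' separator. The file's contents are not captured in this log, so that cause is inferred; a minimal reproduction sketch, with illustrative paths only:

	cat <<'EOF' > /tmp/bad-manifest.yaml
	metadata:
	  name: example
	EOF
	kubectl apply -f /tmp/bad-manifest.yaml
	# expected to fail the same way:
	#   error validating data: [apiVersion not set, kind not set]

As the stderr itself points out, --validate=false would suppress the check, but the durable fix is to give every document in ig-crd.yaml explicit apiVersion and kind fields.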
	I1002 21:08:05.437472 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:05.437579 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:05.461514 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:05.585233 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:05.933700 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:05.933885 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:05.961179 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:06.086109 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:06.433489 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:06.434505 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:06.464644 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:06.586607 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:06.936555 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:06.937081 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:06.961722 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:07.087512 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:07.440296 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:07.440531 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:07.463813 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:07.586597 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:07.935897 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:07.936141 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:07.963000 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:08.086271 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:08.433669 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:08.434325 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:08.461705 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:08.585849 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:08.934986 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:08.935502 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:08.962431 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:09.085416 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:09.434715 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:09.435128 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:09.461286 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:09.585251 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:09.935779 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:09.935946 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:09.961910 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:10.086654 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:10.435293 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:10.436495 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:10.461638 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:10.585719 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:10.936253 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:10.936358 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:10.961853 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:11.086401 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:11.435333 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:11.436117 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:11.461941 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:11.586141 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:11.933105 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:11.935094 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:11.961806 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:12.085851 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:12.434869 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:12.435200 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:12.461513 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:12.586132 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:12.936070 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:12.936441 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:12.962451 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:13.086400 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:13.435927 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:13.436372 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:13.461264 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:13.584893 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:13.935234 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:13.935667 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:13.962212 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:14.085509 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:14.433572 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:14.434981 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:14.462087 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:14.586067 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:14.934840 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:14.934973 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:14.960929 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:15.085783 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:15.435353 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:15.435554 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:15.462123 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:15.585589 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:15.937855 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:15.938165 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:15.961618 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:16.085932 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:16.432883 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:16.434758 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:16.460812 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:16.585572 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:16.935716 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:16.936139 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:16.962478 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:17.085510 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:17.436248 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:17.436392 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:17.464981 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:17.586306 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:17.934174 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:17.934269 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:17.961495 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:18.085289 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:18.435023 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:18.436117 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:18.461851 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:18.586304 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:18.935122 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:18.935413 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:18.962294 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:19.085876 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:19.435022 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:19.435458 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:19.461466 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:19.585443 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:19.936048 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:19.936431 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:19.962207 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:20.085986 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:20.435550 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:20.436261 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:20.461569 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:20.585170 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:20.936339 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:20.936880 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:20.963714 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:21.085710 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:21.434942 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:21.435100 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:21.461048 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:21.585963 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:21.936407 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:21.936809 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:21.961206 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:22.085630 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:22.434772 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:22.435051 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:22.461225 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:22.585010 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:22.934527 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:22.934543 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:22.961884 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:23.086270 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:23.433792 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:23.436089 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:23.461299 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:23.586063 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:23.933782 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:23.934014 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:23.960999 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:24.086454 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:24.435801 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:24.436382 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:24.461704 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:24.585532 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:24.935200 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:24.935325 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:24.964825 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:25.085936 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:25.438178 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:25.438751 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:25.461753 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:25.586340 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:25.935308 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:25.935738 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:25.961837 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:26.087334 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:26.435665 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:26.436140 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:26.460940 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:26.586429 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:26.935644 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:26.936206 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:26.962687 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:27.086554 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:27.160867 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 21:08:27.435115 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:27.435287 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:27.461345 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:27.585276 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:27.939012 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:27.939195 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.039185 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:28.086480 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:28.433271 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:28.435201 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.437614 1273271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.276713519s)
	W1002 21:08:28.437685 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:08:28.437718 1273271 retry.go:31] will retry after 42.484158576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:08:28.461235 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:28.586057 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:28.935061 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:28.935160 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:28.961437 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:29.086099 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:29.437724 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:08:29.437980 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:29.460922 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:29.585729 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:29.945792 1273271 kapi.go:107] duration metric: took 1m14.014779115s to wait for kubernetes.io/minikube-addons=registry ...
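
The kapi.go lines throughout this block are minikube polling pods by label selector until they are Running and Ready; the kubernetes.io/minikube-addons=registry selector has just been satisfied after 1m14s. A roughly equivalent manual check (an illustration, not what the test harness runs; the registry pods sit in kube-system per the pod listing above) would be:

	kubectl wait --namespace kube-system \
	  --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --timeout=120s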
	I1002 21:08:29.946312 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:29.961562 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:30.086013 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:30.434015 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:30.535289 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:30.585944 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:30.934584 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:30.961415 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:31.085771 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:31.434208 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:31.461429 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:31.585531 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:31.934197 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:31.961705 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:32.086804 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:32.434439 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:32.462932 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:32.585993 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:32.948916 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:32.969522 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:33.085919 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:33.433046 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:33.461630 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:33.586179 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:33.934561 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:33.961858 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:34.086699 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:34.434680 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:34.461289 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:34.585597 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:34.934539 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:34.961388 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:35.086112 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:35.434063 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:35.461520 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:35.585568 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:35.935045 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:35.962575 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:36.086081 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:36.434638 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:36.462260 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:36.586447 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:36.943362 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:36.981528 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:37.086061 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:37.433958 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:37.474403 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:37.586385 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:37.933577 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:37.961741 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:38.087925 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:38.434598 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:38.461907 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:38.586748 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:38.938154 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:38.963083 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:39.086323 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:39.434819 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:39.461461 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:39.586367 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:39.940729 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:39.962502 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:40.086645 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:40.434353 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:40.461522 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:40.585871 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:40.934410 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:40.961532 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:41.086270 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:41.434074 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:41.461266 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:41.585155 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:41.952452 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:41.962615 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:42.086466 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:42.434290 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:42.461345 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:42.585318 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:42.934251 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:42.961541 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:43.086662 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:43.434222 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:43.461518 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:43.585662 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:43.934912 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:43.960957 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:44.086700 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:44.434078 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:44.461344 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:44.588080 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:44.936421 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:44.962149 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:45.091158 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:45.433905 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:45.463091 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:45.587861 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:45.934003 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:45.961124 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:46.085508 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:46.433563 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:46.462184 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:46.586783 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:46.934470 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:46.962023 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:47.085937 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:47.436095 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:47.461386 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:47.585488 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:47.940078 1273271 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:08:48.035238 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:48.086179 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:48.433386 1273271 kapi.go:107] duration metric: took 1m32.503175872s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 21:08:48.461565 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:48.586166 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:48.961827 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:49.090305 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:49.461980 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:49.585792 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:49.967892 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:50.086235 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:50.462091 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:50.586328 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:08:50.965666 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:51.091329 1273271 kapi.go:107] duration metric: took 1m31.509013187s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 21:08:51.094311 1273271 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-806706 cluster.
	I1002 21:08:51.097720 1273271 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 21:08:51.103013 1273271 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
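For reference, the `gcp-auth-skip-secret` opt-out mentioned in the hint above is just a pod label. A minimal sketch follows; the pod name and image are hypothetical, and the "true" value follows minikube's documented convention:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                # hypothetical name for illustration
	  labels:
	    gcp-auth-skip-secret: "true"    # excludes this pod from credential mounting
	spec:
	  containers:
	  - name: app
	    image: busybox:stable
	    command: ["sleep", "3600"]

For pods created before the addon was enabled, the refresh path is the flag named above, e.g. out/minikube-linux-arm64 -p addons-806706 addons enable gcp-auth --refresh.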
	I1002 21:08:51.461617 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:51.962923 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:52.465117 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:52.961449 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:53.462055 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:53.961600 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:54.463986 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:54.976419 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:55.463354 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:55.963530 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:56.461539 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:56.961216 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:57.464496 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:57.971108 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:58.461875 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:58.962795 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:59.463317 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:08:59.977176 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:00.463115 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:00.961672 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:01.461626 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:01.961410 1273271 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:09:02.462305 1273271 kapi.go:107] duration metric: took 1m46.004529999s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 21:09:10.922166 1273271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 21:09:11.735248 1273271 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:09:11.735348 1273271 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
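The validation failure above is mechanical rather than cluster-related: every document kubectl validates must declare top-level apiVersion and kind fields, and at least one document in ig-crd.yaml evidently lacks them. As a point of comparison, a minimal well-formed CustomResourceDefinition (the resource and group names here are hypothetical, not taken from the gadget manifests) looks like:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: widgets.example.com         # must be <plural>.<group>
	spec:
	  group: example.com
	  scope: Namespaced
	  names:
	    plural: widgets
	    singular: widget
	    kind: Widget
	  versions:
	  - name: v1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object

The --validate=false escape hatch the error mentions would only suppress the check, not repair the manifest.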
	I1002 21:09:11.738376 1273271 out.go:179] * Enabled addons: registry-creds, cloud-spanner, ingress-dns, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1002 21:09:11.741160 1273271 addons.go:514] duration metric: took 2m2.011439269s for enable addons: enabled=[registry-creds cloud-spanner ingress-dns storage-provisioner amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1002 21:09:11.741207 1273271 start.go:246] waiting for cluster config update ...
	I1002 21:09:11.741227 1273271 start.go:255] writing updated cluster config ...
	I1002 21:09:11.741518 1273271 ssh_runner.go:195] Run: rm -f paused
	I1002 21:09:11.745229 1273271 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:09:11.748918 1273271 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.755464 1273271 pod_ready.go:94] pod "coredns-66bc5c9577-pr27b" is "Ready"
	I1002 21:09:11.755488 1273271 pod_ready.go:86] duration metric: took 6.542479ms for pod "coredns-66bc5c9577-pr27b" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.757578 1273271 pod_ready.go:83] waiting for pod "etcd-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.762015 1273271 pod_ready.go:94] pod "etcd-addons-806706" is "Ready"
	I1002 21:09:11.762069 1273271 pod_ready.go:86] duration metric: took 4.471077ms for pod "etcd-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.764260 1273271 pod_ready.go:83] waiting for pod "kube-apiserver-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.768883 1273271 pod_ready.go:94] pod "kube-apiserver-addons-806706" is "Ready"
	I1002 21:09:11.768907 1273271 pod_ready.go:86] duration metric: took 4.616903ms for pod "kube-apiserver-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:11.771389 1273271 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.149512 1273271 pod_ready.go:94] pod "kube-controller-manager-addons-806706" is "Ready"
	I1002 21:09:12.149539 1273271 pod_ready.go:86] duration metric: took 378.124844ms for pod "kube-controller-manager-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.349553 1273271 pod_ready.go:83] waiting for pod "kube-proxy-8gptp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.749548 1273271 pod_ready.go:94] pod "kube-proxy-8gptp" is "Ready"
	I1002 21:09:12.749628 1273271 pod_ready.go:86] duration metric: took 400.046684ms for pod "kube-proxy-8gptp" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:12.949745 1273271 pod_ready.go:83] waiting for pod "kube-scheduler-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:13.349613 1273271 pod_ready.go:94] pod "kube-scheduler-addons-806706" is "Ready"
	I1002 21:09:13.349643 1273271 pod_ready.go:86] duration metric: took 399.871935ms for pod "kube-scheduler-addons-806706" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:09:13.349658 1273271 pod_ready.go:40] duration metric: took 1.604394828s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:09:13.409644 1273271 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:09:13.412871 1273271 out.go:179] * Done! kubectl is now configured to use "addons-806706" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.49554697Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b Namespace:local-path-storage ID:0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4 UID:ba4d854f-2ac5-4457-923f-32598a6e27fd NetNS:/var/run/netns/483e803f-12f3-4fde-953f-f3c208e16d36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b24e20}] Aliases:map[]}"
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.495744521Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b for CNI network kindnet (type=ptp)"
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.507734509Z" level=info msg="Ran pod sandbox 0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4 with infra container: local-path-storage/helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b/POD" id=b8d540ed-a2c6-4f66-ba60-8fa4b68aef35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.511012903Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=5aef64d5-a3b1-4dcf-9b0a-4f1ce4a3ebcd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.51520949Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=52715920-a1f1-47e0-80ba-5c7f667f86d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.526493735Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b/helper-pod" id=9077d797-c764-4fc6-8bac-717be3264233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.527647198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.538295443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.538812119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.55811531Z" level=info msg="Created container 0cd1221c6c63095c5e39e1b2a8d1430e802efb9fd7f4566e364d451c53b30635: local-path-storage/helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b/helper-pod" id=9077d797-c764-4fc6-8bac-717be3264233 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.560252754Z" level=info msg="Starting container: 0cd1221c6c63095c5e39e1b2a8d1430e802efb9fd7f4566e364d451c53b30635" id=9f6fedc6-3ae0-44a9-882b-0c5850f335ea name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:09:56 addons-806706 crio[832]: time="2025-10-02T21:09:56.562244338Z" level=info msg="Started container" PID=5560 containerID=0cd1221c6c63095c5e39e1b2a8d1430e802efb9fd7f4566e364d451c53b30635 description=local-path-storage/helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b/helper-pod id=9f6fedc6-3ae0-44a9-882b-0c5850f335ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4
	Oct 02 21:09:58 addons-806706 crio[832]: time="2025-10-02T21:09:58.276912691Z" level=info msg="Stopping pod sandbox: 0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4" id=d7418b7f-123b-47f3-85bf-410318726225 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:09:58 addons-806706 crio[832]: time="2025-10-02T21:09:58.277205674Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b Namespace:local-path-storage ID:0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4 UID:ba4d854f-2ac5-4457-923f-32598a6e27fd NetNS:/var/run/netns/483e803f-12f3-4fde-953f-f3c208e16d36 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ba80}] Aliases:map[]}"
	Oct 02 21:09:58 addons-806706 crio[832]: time="2025-10-02T21:09:58.277339825Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b from CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:09:58 addons-806706 crio[832]: time="2025-10-02T21:09:58.297891042Z" level=info msg="Stopped pod sandbox: 0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4" id=d7418b7f-123b-47f3-85bf-410318726225 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.386368546Z" level=info msg="Running pod sandbox: default/task-pv-pod-restore/POD" id=2281011f-1527-4131-ba81-3df23e2ee2a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.386441743Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.406918212Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9 UID:17880673-f81c-4abe-8af0-1d693004da3d NetNS:/var/run/netns/b2c57761-8c2d-4f58-95b3-0065aa8b52f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b242c0}] Aliases:map[]}"
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.406962281Z" level=info msg="Adding pod default_task-pv-pod-restore to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.421329289Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9 UID:17880673-f81c-4abe-8af0-1d693004da3d NetNS:/var/run/netns/b2c57761-8c2d-4f58-95b3-0065aa8b52f4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001b242c0}] Aliases:map[]}"
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.421747554Z" level=info msg="Checking pod default_task-pv-pod-restore for CNI network kindnet (type=ptp)"
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.431030968Z" level=info msg="Ran pod sandbox a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9 with infra container: default/task-pv-pod-restore/POD" id=2281011f-1527-4131-ba81-3df23e2ee2a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.435782909Z" level=info msg="Pulling image: docker.io/nginx:latest" id=40ba141f-92c9-4b6f-9e45-500fc1e28c21 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:10:04 addons-806706 crio[832]: time="2025-10-02T21:10:04.439262522Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	0cd1221c6c630       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             8 seconds ago        Exited              helper-pod                               0                   0408cf930406b       helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b   local-path-storage
	4d3733585d723       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            11 seconds ago       Exited              busybox                                  0                   ae12bd367b103       test-local-path                                              default
	d5e86468e4d42       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            15 seconds ago       Exited              helper-pod                               0                   d71acd43efe51       helper-pod-create-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b   local-path-storage
	d35cf495129f3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          47 seconds ago       Running             busybox                                  0                   e988ad414eba0       busybox                                                      default
	c49402c09d33e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	c8b471a8c4840       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	eac22bc4229e1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	2c7540d82769e       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	3da90336e0b30       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	633694731e206       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            About a minute ago   Running             gadget                                   0                   b349b665e4124       gadget-jmfns                                                 gadget
	95f9be3e8c94e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago   Running             gcp-auth                                 0                   3f6f92d72955c       gcp-auth-78565c9fb4-x9mnx                                    gcp-auth
	f4df7e24e8366       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             About a minute ago   Running             controller                               0                   70e6b7b8023c6       ingress-nginx-controller-9cc49f96f-h8sxf                     ingress-nginx
	3ebf774e4e0f2       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   5f2ff5b3c7b54       csi-hostpath-attacher-0                                      kube-system
	7f5b74ab76d02       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c503a0059a533       snapshot-controller-7d9fbc56b8-ms2zp                         kube-system
	6d53e2adbcf08       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   3cf44a22a134c       yakd-dashboard-5ff678cb9-ldmgf                               yakd-dashboard
	05feced20bbf8       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               About a minute ago   Running             cloud-spanner-emulator                   0                   1f09d4910b7ff       cloud-spanner-emulator-85f6b7fc65-l4pxd                      default
	54466abff97b0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   2f0e21390145b       local-path-provisioner-648f6765c9-9pqrj                      local-path-storage
	a9334e6a8404e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   a038d6601e093       registry-proxy-z5g9b                                         kube-system
	c8c35f4ab1b00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              patch                                    0                   da5e43e682b25       ingress-nginx-admission-patch-4cbvw                          ingress-nginx
	d663f5ea76e43       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   1256bb536b2eb       metrics-server-85b7d694d7-wbgcl                              kube-system
	6678f7b494598       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f775cf4077c93       snapshot-controller-7d9fbc56b8-rbvm4                         kube-system
	9fdc4fbb9694b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   a4e1f58b3d9be       csi-hostpathplugin-r7mrn                                     kube-system
	d8a82613cbbf7       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   8039a124c8a99       registry-66898fdd98-wlkhd                                    kube-system
	a3db0f10ee2bd       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   fbb959e2e6dc3       nvidia-device-plugin-daemonset-x2b9d                         kube-system
	da5b4d1a96892       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   70c138bd7bdd8       ingress-nginx-admission-create-r4gc4                         ingress-nginx
	7e58d89c526a1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   63027a525bee4       csi-hostpath-resizer-0                                       kube-system
	b5c710f4aca28       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               2 minutes ago        Running             minikube-ingress-dns                     0                   aad269cb1f759       kube-ingress-dns-minikube                                    kube-system
	96b63ced4cbec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             2 minutes ago        Running             coredns                                  0                   40d563d4f9e24       coredns-66bc5c9577-pr27b                                     kube-system
	e06a73003aabb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   3d6f692deab7a       storage-provisioner                                          kube-system
	c3c9833a8ac94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   9d5cc90f7db66       kindnet-ssl2c                                                kube-system
	3ce054faf2e39       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   0f1c32b3805f8       kube-proxy-8gptp                                             kube-system
	d78cdb898250b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             3 minutes ago        Running             etcd                                     0                   4c45159c76a47       etcd-addons-806706                                           kube-system
	edee49f1f2c30       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             3 minutes ago        Running             kube-controller-manager                  0                   7436e82c12abc       kube-controller-manager-addons-806706                        kube-system
	5899705446a85       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             3 minutes ago        Running             kube-scheduler                           0                   42d963a42e386       kube-scheduler-addons-806706                                 kube-system
	e997bf38b55bf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             3 minutes ago        Running             kube-apiserver                           0                   15fab211dd370       kube-apiserver-addons-806706                                 kube-system
	
	
	==> coredns [96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1] <==
	[INFO] 10.244.0.8:53399 - 57026 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001902625s
	[INFO] 10.244.0.8:53399 - 13460 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000183324s
	[INFO] 10.244.0.8:53399 - 30409 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000176948s
	[INFO] 10.244.0.8:57863 - 1696 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149708s
	[INFO] 10.244.0.8:57863 - 1491 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075757s
	[INFO] 10.244.0.8:39269 - 52596 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078801s
	[INFO] 10.244.0.8:39269 - 52391 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192776s
	[INFO] 10.244.0.8:52532 - 47957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125634s
	[INFO] 10.244.0.8:52532 - 47776 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014266s
	[INFO] 10.244.0.8:47949 - 37240 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001272787s
	[INFO] 10.244.0.8:47949 - 37428 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001381298s
	[INFO] 10.244.0.8:55285 - 46820 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012465s
	[INFO] 10.244.0.8:55285 - 47001 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128366s
	[INFO] 10.244.0.20:39828 - 11535 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276138s
	[INFO] 10.244.0.20:39778 - 12043 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180452s
	[INFO] 10.244.0.20:56460 - 18671 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079318s
	[INFO] 10.244.0.20:39048 - 26232 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000283564s
	[INFO] 10.244.0.20:53333 - 19997 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165125s
	[INFO] 10.244.0.20:52896 - 61599 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102095s
	[INFO] 10.244.0.20:34225 - 40566 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002069999s
	[INFO] 10.244.0.20:43159 - 14199 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001978161s
	[INFO] 10.244.0.20:51583 - 16271 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001632625s
	[INFO] 10.244.0.20:49614 - 24774 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001696845s
	[INFO] 10.244.0.24:56897 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000214618s
	[INFO] 10.244.0.24:60900 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000142882s
	
	
	==> describe nodes <==
	Name:               addons-806706
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-806706
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=addons-806706
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_07_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-806706
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-806706"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:07:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-806706
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:09:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:09:58 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:09:58 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:09:58 +0000   Thu, 02 Oct 2025 21:06:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:09:58 +0000   Thu, 02 Oct 2025 21:07:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-806706
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdd37abed1cf4461af4aac68f6886a7b
	  System UUID:                97003614-72a7-4911-9ef4-c36e5a51170b
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     cloud-spanner-emulator-85f6b7fc65-l4pxd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  gadget                      gadget-jmfns                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  gcp-auth                    gcp-auth-78565c9fb4-x9mnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-h8sxf    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m49s
	  kube-system                 coredns-66bc5c9577-pr27b                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m54s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 csi-hostpathplugin-r7mrn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 etcd-addons-806706                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m59s
	  kube-system                 kindnet-ssl2c                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m55s
	  kube-system                 kube-apiserver-addons-806706                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-controller-manager-addons-806706       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-8gptp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-scheduler-addons-806706                100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 metrics-server-85b7d694d7-wbgcl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m50s
	  kube-system                 nvidia-device-plugin-daemonset-x2b9d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 registry-66898fdd98-wlkhd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 registry-creds-764b6fb674-v22d7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 registry-proxy-z5g9b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-ms2zp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 snapshot-controller-7d9fbc56b8-rbvm4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  local-path-storage          local-path-provisioner-648f6765c9-9pqrj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ldmgf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
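As a quick sanity check, the percentages above are requests (or limits) divided by the Allocatable figures listed earlier for this node: cpu 1050m / 2000m ≈ 52%, memory 638Mi (653,312Ki) / 8,022,308Ki ≈ 8%; on the limits side, 100m / 2000m = 5% and 476Mi (487,424Ki) / 8,022,308Ki ≈ 6%.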
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  3m7s (x8 over 3m7s)  kubelet          Node addons-806706 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m7s (x8 over 3m7s)  kubelet          Node addons-806706 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m7s (x8 over 3m7s)  kubelet          Node addons-806706 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m59s                kubelet          Node addons-806706 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s                kubelet          Node addons-806706 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s                kubelet          Node addons-806706 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m56s                node-controller  Node addons-806706 event: Registered Node addons-806706 in Controller
	  Normal   NodeReady                2m13s                kubelet          Node addons-806706 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 20:02] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:05] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee] <==
	{"level":"warn","ts":"2025-10-02T21:07:00.201590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.306672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.352657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.404311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.426964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.455400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.494940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.516633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.542308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.581503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.637430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.672454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.710135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.746148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.767979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.788005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:00.879832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:07:10.309798Z","caller":"traceutil/trace.go:172","msg":"trace[213488489] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"104.375095ms","start":"2025-10-02T21:07:10.205410Z","end":"2025-10-02T21:07:10.309785Z","steps":["trace[213488489] 'process raft request'  (duration: 104.339863ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T21:07:10.310017Z","caller":"traceutil/trace.go:172","msg":"trace[267441551] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"104.887578ms","start":"2025-10-02T21:07:10.205121Z","end":"2025-10-02T21:07:10.310009Z","steps":["trace[267441551] 'process raft request'  (duration: 104.551166ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-02T21:07:16.552367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:16.580785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.779139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.796366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.885761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:07:38.914238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [95f9be3e8c94e0deefc712e6b63d8a6800487a0d163b134da855cb302a17c6bf] <==
	2025/10/02 21:08:50 GCP Auth Webhook started!
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:14 Ready to marshal response ...
	2025/10/02 21:09:14 Ready to write response ...
	2025/10/02 21:09:34 Ready to marshal response ...
	2025/10/02 21:09:34 Ready to write response ...
	2025/10/02 21:09:34 Ready to marshal response ...
	2025/10/02 21:09:34 Ready to write response ...
	2025/10/02 21:09:48 Ready to marshal response ...
	2025/10/02 21:09:48 Ready to write response ...
	2025/10/02 21:09:48 Ready to marshal response ...
	2025/10/02 21:09:48 Ready to write response ...
	2025/10/02 21:09:56 Ready to marshal response ...
	2025/10/02 21:09:56 Ready to write response ...
	2025/10/02 21:10:04 Ready to marshal response ...
	2025/10/02 21:10:04 Ready to write response ...
	
	
	==> kernel <==
	 21:10:05 up  5:52,  0 user,  load average: 1.71, 3.05, 3.57
	Linux addons-806706 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0] <==
	I1002 21:08:00.919429       1 main.go:301] handling current node
	I1002 21:08:10.918302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:08:10.918329       1 main.go:301] handling current node
	I1002 21:08:20.918737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:08:20.918873       1 main.go:301] handling current node
	I1002 21:08:30.919153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:08:30.919184       1 main.go:301] handling current node
	I1002 21:08:40.918295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:08:40.918330       1 main.go:301] handling current node
	I1002 21:08:50.919186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:08:50.919226       1 main.go:301] handling current node
	I1002 21:09:00.919183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:00.919213       1 main.go:301] handling current node
	I1002 21:09:10.918302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:10.918428       1 main.go:301] handling current node
	I1002 21:09:20.918576       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:20.918613       1 main.go:301] handling current node
	I1002 21:09:30.926131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:30.926257       1 main.go:301] handling current node
	I1002 21:09:40.918332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:40.918370       1 main.go:301] handling current node
	I1002 21:09:50.919033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:09:50.919105       1 main.go:301] handling current node
	I1002 21:10:00.919304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:10:00.919440       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c] <==
	W1002 21:07:51.456905       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.230.182:443: connect: connection refused
	E1002 21:07:51.456953       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.230.182:443: connect: connection refused" logger="UnhandledError"
	W1002 21:07:51.462533       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.230.182:443: connect: connection refused
	E1002 21:07:51.462580       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.230.182:443: connect: connection refused" logger="UnhandledError"
	W1002 21:07:51.576571       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.230.182:443: connect: connection refused
	E1002 21:07:51.576616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.230.182:443: connect: connection refused" logger="UnhandledError"
	W1002 21:08:15.781010       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:15.781166       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1002 21:08:15.781183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1002 21:08:15.784781       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:15.784874       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 21:08:15.784893       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1002 21:08:36.905794       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.167.158:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.167.158:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.167.158:443: connect: connection refused" logger="UnhandledError"
	W1002 21:08:36.905879       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 21:08:36.905930       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1002 21:08:36.993456       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1002 21:09:23.718076       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59186: use of closed network connection
	E1002 21:09:24.013967       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59194: use of closed network connection
	I1002 21:09:45.317478       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd] <==
	I1002 21:07:08.799213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 21:07:08.799354       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:07:08.799463       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:07:08.809270       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:07:08.818855       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-806706" podCIDRs=["10.244.0.0/24"]
	I1002 21:07:08.821072       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:07:08.838127       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:07:08.838420       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:07:08.838462       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:07:08.838491       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:07:08.839149       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 21:07:08.839342       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 21:07:08.842121       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:07:08.846597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 21:07:14.925919       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 21:07:38.771882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 21:07:38.772051       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 21:07:38.772092       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 21:07:38.854332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 21:07:38.859328       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 21:07:38.872469       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:07:39.059645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:07:53.985651       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 21:08:08.877490       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 21:08:09.067573       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33] <==
	I1002 21:07:10.747352       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:07:10.863715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:07:10.963951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:07:10.963993       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:07:10.964068       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:07:11.050193       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:07:11.050250       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:07:11.059032       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:07:11.059414       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:07:11.059429       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:07:11.061024       1 config.go:200] "Starting service config controller"
	I1002 21:07:11.061036       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:07:11.061068       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:07:11.061072       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:07:11.061088       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:07:11.061092       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:07:11.061798       1 config.go:309] "Starting node config controller"
	I1002 21:07:11.061806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:07:11.061813       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:07:11.162449       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:07:11.162489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:07:11.162528       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d] <==
	I1002 21:07:01.590329       1 serving.go:386] Generated self-signed cert in-memory
	I1002 21:07:03.817764       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:07:03.817874       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:07:03.823930       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 21:07:03.824073       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 21:07:03.824158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:07:03.824195       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:07:03.824244       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.824274       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.827710       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:07:03.827794       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:07:03.925636       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 21:07:03.925636       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 21:07:03.925653       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.417977    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba4d854f-2ac5-4457-923f-32598a6e27fd-script" (OuterVolumeSpecName: "script") pod "ba4d854f-2ac5-4457-923f-32598a6e27fd" (UID: "ba4d854f-2ac5-4457-923f-32598a6e27fd"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.422319    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4d854f-2ac5-4457-923f-32598a6e27fd-kube-api-access-v9v78" (OuterVolumeSpecName: "kube-api-access-v9v78") pod "ba4d854f-2ac5-4457-923f-32598a6e27fd" (UID: "ba4d854f-2ac5-4457-923f-32598a6e27fd"). InnerVolumeSpecName "kube-api-access-v9v78". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.518366    1267 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ba4d854f-2ac5-4457-923f-32598a6e27fd-data\") on node \"addons-806706\" DevicePath \"\""
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.518414    1267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9v78\" (UniqueName: \"kubernetes.io/projected/ba4d854f-2ac5-4457-923f-32598a6e27fd-kube-api-access-v9v78\") on node \"addons-806706\" DevicePath \"\""
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.518428    1267 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba4d854f-2ac5-4457-923f-32598a6e27fd-gcp-creds\") on node \"addons-806706\" DevicePath \"\""
	Oct 02 21:09:58 addons-806706 kubelet[1267]: I1002 21:09:58.518437    1267 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ba4d854f-2ac5-4457-923f-32598a6e27fd-script\") on node \"addons-806706\" DevicePath \"\""
	Oct 02 21:09:59 addons-806706 kubelet[1267]: I1002 21:09:59.281633    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4"
	Oct 02 21:09:59 addons-806706 kubelet[1267]: E1002 21:09:59.283480    1267 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b\" is forbidden: User \"system:node:addons-806706\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-806706' and this object" podUID="ba4d854f-2ac5-4457-923f-32598a6e27fd" pod="local-path-storage/helper-pod-delete-pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b"
	Oct 02 21:09:59 addons-806706 kubelet[1267]: E1002 21:09:59.529372    1267 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 21:09:59 addons-806706 kubelet[1267]: E1002 21:09:59.529468    1267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a75a4bb4-6459-4c36-9ae4-df35421d3a30-gcr-creds podName:a75a4bb4-6459-4c36-9ae4-df35421d3a30 nodeName:}" failed. No retries permitted until 2025-10-02 21:12:01.52945023 +0000 UTC m=+296.639206788 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a75a4bb4-6459-4c36-9ae4-df35421d3a30-gcr-creds") pod "registry-creds-764b6fb674-v22d7" (UID: "a75a4bb4-6459-4c36-9ae4-df35421d3a30") : secret "registry-creds-gcr" not found
	Oct 02 21:10:01 addons-806706 kubelet[1267]: I1002 21:10:01.107461    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4d854f-2ac5-4457-923f-32598a6e27fd" path="/var/lib/kubelet/pods/ba4d854f-2ac5-4457-923f-32598a6e27fd/volumes"
	Oct 02 21:10:04 addons-806706 kubelet[1267]: I1002 21:10:04.184009    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38f7c3cb-e77c-4b6c-83e0-1ad6bc4089b1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^29eba6d7-9fd4-11f0-a25f-868e5b189c5b\") pod \"task-pv-pod-restore\" (UID: \"17880673-f81c-4abe-8af0-1d693004da3d\") " pod="default/task-pv-pod-restore"
	Oct 02 21:10:04 addons-806706 kubelet[1267]: I1002 21:10:04.184121    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17880673-f81c-4abe-8af0-1d693004da3d-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"17880673-f81c-4abe-8af0-1d693004da3d\") " pod="default/task-pv-pod-restore"
	Oct 02 21:10:04 addons-806706 kubelet[1267]: I1002 21:10:04.184310    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbnqz\" (UniqueName: \"kubernetes.io/projected/17880673-f81c-4abe-8af0-1d693004da3d-kube-api-access-vbnqz\") pod \"task-pv-pod-restore\" (UID: \"17880673-f81c-4abe-8af0-1d693004da3d\") " pod="default/task-pv-pod-restore"
	Oct 02 21:10:04 addons-806706 kubelet[1267]: I1002 21:10:04.303853    1267 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-38f7c3cb-e77c-4b6c-83e0-1ad6bc4089b1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^29eba6d7-9fd4-11f0-a25f-868e5b189c5b\") pod \"task-pv-pod-restore\" (UID: \"17880673-f81c-4abe-8af0-1d693004da3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/f07cd8654d90f177a2fb0483872485e06355cb73927d608fe3451dd20fb6488e/globalmount\"" pod="default/task-pv-pod-restore"
	Oct 02 21:10:04 addons-806706 kubelet[1267]: W1002 21:10:04.425873    1267 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/crio-a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9 WatchSource:0}: Error finding container a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9: Status 404 returned error can't find the container with id a45c156e98a289cb029cb0f9a1a212b57c37a8a0c7b81fd77c79a7f07793e6a9
	Oct 02 21:10:05 addons-806706 kubelet[1267]: I1002 21:10:05.092888    1267 scope.go:117] "RemoveContainer" containerID="0cd1221c6c63095c5e39e1b2a8d1430e802efb9fd7f4566e364d451c53b30635"
	Oct 02 21:10:05 addons-806706 kubelet[1267]: I1002 21:10:05.111544    1267 scope.go:117] "RemoveContainer" containerID="4d3733585d723dbfdb6751ec2359366bed13c2b8a15c81eaa36794da422848ba"
	Oct 02 21:10:05 addons-806706 kubelet[1267]: I1002 21:10:05.123264    1267 scope.go:117] "RemoveContainer" containerID="d5e86468e4d42208908f2d60b285dc0a5bec4265f94e6cb6d80c66adaf0acff9"
	Oct 02 21:10:05 addons-806706 kubelet[1267]: E1002 21:10:05.219710    1267 manager.go:1116] Failed to create existing container: /docker/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/crio-ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458: Error finding container ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458: Status 404 returned error can't find the container with id ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458
	Oct 02 21:10:05 addons-806706 kubelet[1267]: E1002 21:10:05.254406    1267 manager.go:1116] Failed to create existing container: /crio-d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24: Error finding container d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24: Status 404 returned error can't find the container with id d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24
	Oct 02 21:10:05 addons-806706 kubelet[1267]: E1002 21:10:05.264426    1267 manager.go:1116] Failed to create existing container: /docker/9be5d6290945e82010bcf9524348f0acb4b0bb7bcce5a396c97d215e4482b326/crio-d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24: Error finding container d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24: Status 404 returned error can't find the container with id d71acd43efe51b3b669168c8d255ffde41b3cf0b7e9f294d549ec77ffe14af24
	Oct 02 21:10:05 addons-806706 kubelet[1267]: E1002 21:10:05.264902    1267 manager.go:1116] Failed to create existing container: /crio-0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4: Error finding container 0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4: Status 404 returned error can't find the container with id 0408cf930406bc6713471c1f7a7daf47e45bc825ad2833c1dee3e18de61944a4
	Oct 02 21:10:05 addons-806706 kubelet[1267]: E1002 21:10:05.265325    1267 manager.go:1116] Failed to create existing container: /crio-ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458: Error finding container ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458: Status 404 returned error can't find the container with id ae12bd367b103259965a950b326d50800c3c9bd8fbe29ddaeea6dc1bff4f4458
	Oct 02 21:10:05 addons-806706 kubelet[1267]: I1002 21:10:05.338711    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=0.995845454 podStartE2EDuration="1.338686757s" podCreationTimestamp="2025-10-02 21:10:04 +0000 UTC" firstStartedPulling="2025-10-02 21:10:04.432690686 +0000 UTC m=+179.542447245" lastFinishedPulling="2025-10-02 21:10:04.775531989 +0000 UTC m=+179.885288548" observedRunningTime="2025-10-02 21:10:05.336545835 +0000 UTC m=+180.446302394" watchObservedRunningTime="2025-10-02 21:10:05.338686757 +0000 UTC m=+180.448443324"
	
	
	==> storage-provisioner [e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd] <==
	W1002 21:09:39.372309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:41.375477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:41.383096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:43.387197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:43.395189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:45.406246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:45.424018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:47.427135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:47.432413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:49.435148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:49.440518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:51.443714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:51.449775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:53.452812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:53.466119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:55.469604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:55.473986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:57.476692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:57.481671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:59.484716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:09:59.489771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:10:01.492950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:10:01.497852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:10:03.503748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:10:03.512053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
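
Note: the storage-provisioner log above is dominated by repeated client-go deprecation warnings because the provisioner still reads v1 Endpoints (most likely for its leader-election lock; that is an inference, not something the log states). A minimal sketch of the replacement API, discovery.k8s.io/v1 EndpointSlice, follows; the kubeconfig path and namespace are illustrative assumptions, and this is not the provisioner's actual code.

	// Sketch: list discovery.k8s.io/v1 EndpointSlices instead of deprecated v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices carry the same endpoint data v1 Endpoints did,
		// without tripping the v1.33+ deprecation warning seen above.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}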
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-806706 -n addons-806706
helpers_test.go:269: (dbg) Run:  kubectl --context addons-806706 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw registry-creds-764b6fb674-v22d7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw registry-creds-764b6fb674-v22d7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw registry-creds-764b6fb674-v22d7: exit status 1 (88.746041ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r4gc4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4cbvw" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-v22d7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-806706 describe pod ingress-nginx-admission-create-r4gc4 ingress-nginx-admission-patch-4cbvw registry-creds-764b6fb674-v22d7: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable headlamp --alsologtostderr -v=1: exit status 11 (267.098062ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:10:06.292986 1280888 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:06.293701 1280888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:06.293715 1280888 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:06.293721 1280888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:06.294111 1280888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:06.294486 1280888 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:06.294913 1280888 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:06.294934 1280888 addons.go:606] checking whether the cluster is paused
	I1002 21:10:06.295037 1280888 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:06.295058 1280888 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:06.295961 1280888 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:06.314410 1280888 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:06.314564 1280888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:06.332999 1280888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:06.430379 1280888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:06.430530 1280888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:06.463801 1280888 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:06.463820 1280888 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:06.463825 1280888 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:06.463830 1280888 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:06.463833 1280888 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:06.463837 1280888 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:06.463841 1280888 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:06.463844 1280888 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:06.463848 1280888 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:06.463859 1280888 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:06.463863 1280888 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:06.463866 1280888 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:06.463870 1280888 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:06.463873 1280888 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:06.463877 1280888 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:06.463886 1280888 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:06.463890 1280888 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:06.463895 1280888 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:06.463898 1280888 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:06.463902 1280888 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:06.463907 1280888 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:06.463910 1280888 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:06.463914 1280888 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:06.463917 1280888 cri.go:89] found id: ""
	I1002 21:10:06.463969 1280888 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:06.479843 1280888 out.go:203] 
	W1002 21:10:06.482765 1280888 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:06.482797 1280888 out.go:285] * 
	* 
	W1002 21:10:06.491700 1280888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:06.494858 1280888 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.68s)
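
Note: every failing addons disable in this run exits with MK_ADDON_DISABLE_PAUSED for the same reason: the paused check shells out to sudo runc list -f json, and /run/runc does not exist on this crio node. A minimal sketch of a guard follows; the helper name and the fallback behavior are assumptions, not minikube's actual code or fix. Since CRI itself has no paused container state, a missing runc state directory can reasonably be read as "nothing is paused" rather than as a hard error.

	// Sketch: skip `runc list` when its state directory is absent.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// listPaused is a hypothetical helper, not minikube's API.
	func listPaused() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			// No runc state dir at all, so no container can be paused
			// through runc; report an empty set instead of failing.
			return nil, nil
		}
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		out, err := listPaused()
		if err != nil {
			fmt.Fprintln(os.Stderr, "paused check failed:", err)
			os.Exit(1)
		}
		fmt.Printf("%s\n", out)
	}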

                                                
                                    
TestAddons/parallel/CloudSpanner (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-l4pxd" [89ceb7fb-f483-41da-a1ba-d29b20fa82b3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003723729s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (288.442347ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:10:02.586283 1280303 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:10:02.587129 1280303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:02.587174 1280303 out.go:374] Setting ErrFile to fd 2...
	I1002 21:10:02.587196 1280303 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:10:02.587568 1280303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:10:02.587915 1280303 mustload.go:65] Loading cluster: addons-806706
	I1002 21:10:02.588332 1280303 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:02.588375 1280303 addons.go:606] checking whether the cluster is paused
	I1002 21:10:02.588545 1280303 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:10:02.588591 1280303 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:10:02.589115 1280303 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:10:02.607928 1280303 ssh_runner.go:195] Run: systemctl --version
	I1002 21:10:02.608001 1280303 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:10:02.636069 1280303 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:10:02.748650 1280303 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:10:02.748759 1280303 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:10:02.786345 1280303 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:10:02.786365 1280303 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:10:02.786370 1280303 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:10:02.786374 1280303 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:10:02.786378 1280303 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:10:02.786382 1280303 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:10:02.786386 1280303 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:10:02.786389 1280303 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:10:02.786392 1280303 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:10:02.786402 1280303 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:10:02.786405 1280303 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:10:02.786409 1280303 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:10:02.786412 1280303 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:10:02.786415 1280303 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:10:02.786418 1280303 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:10:02.786425 1280303 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:10:02.786431 1280303 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:10:02.786435 1280303 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:10:02.786439 1280303 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:10:02.786442 1280303 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:10:02.786446 1280303 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:10:02.786450 1280303 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:10:02.786453 1280303 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:10:02.786456 1280303 cri.go:89] found id: ""
	I1002 21:10:02.786510 1280303 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:10:02.801850 1280303 out.go:203] 
	W1002 21:10:02.804786 1280303 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:10:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:10:02.804817 1280303 out.go:285] * 
	* 
	W1002 21:10:02.813888 1280303 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:10:02.816790 1280303 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.30s)
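
Note: the CloudSpanner disable fails identically to Headlamp above; the root cause is the missing OCI runtime state directory, not the addon. A hedged probe for common state roots is sketched below; /run/runc and /run/crun are assumed defaults for runc and crun, not values read from crio's configuration.

	// Sketch: pick an OCI runtime state root before shelling out.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, root := range []string{"/run/runc", "/run/crun"} {
			if st, err := os.Stat(root); err == nil && st.IsDir() {
				fmt.Println("found OCI runtime state at", root)
				return
			}
		}
		fmt.Println("no runc/crun state dir; treat the paused set as empty")
	}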

                                                
                                    
TestAddons/parallel/LocalPath (8.5s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-806706 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-806706 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7b62467c-53e9-4a97-8007-a1962e312ddb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7b62467c-53e9-4a97-8007-a1962e312ddb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7b62467c-53e9-4a97-8007-a1962e312ddb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003585977s
addons_test.go:967: (dbg) Run:  kubectl --context addons-806706 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 ssh "cat /opt/local-path-provisioner/pvc-da87f400-b497-4b71-b4f9-fc0c7528e33b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-806706 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-806706 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (293.103416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:09:56.287115 1280174 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:56.288181 1280174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:56.288228 1280174 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:56.288252 1280174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:56.288564 1280174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:09:56.288925 1280174 mustload.go:65] Loading cluster: addons-806706
	I1002 21:09:56.289347 1280174 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:56.289385 1280174 addons.go:606] checking whether the cluster is paused
	I1002 21:09:56.289527 1280174 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:56.289568 1280174 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:09:56.290168 1280174 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:09:56.308133 1280174 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:56.308191 1280174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:09:56.326776 1280174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:09:56.421060 1280174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:56.421206 1280174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:56.462167 1280174 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:09:56.462233 1280174 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:09:56.462252 1280174 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:09:56.462277 1280174 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:09:56.462308 1280174 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:09:56.462327 1280174 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:09:56.462351 1280174 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:09:56.462361 1280174 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:09:56.462365 1280174 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:09:56.462372 1280174 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:09:56.462375 1280174 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:09:56.462379 1280174 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:09:56.462383 1280174 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:09:56.462386 1280174 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:09:56.462389 1280174 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:09:56.462394 1280174 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:09:56.462399 1280174 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:09:56.462403 1280174 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:09:56.462407 1280174 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:09:56.462410 1280174 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:09:56.462416 1280174 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:09:56.462419 1280174 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:09:56.462422 1280174 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:09:56.462425 1280174 cri.go:89] found id: ""
	I1002 21:09:56.462484 1280174 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:09:56.494847 1280174 out.go:203] 
	W1002 21:09:56.498177 1280174 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:09:56.498275 1280174 out.go:285] * 
	* 
	W1002 21:09:56.509715 1280174 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:09:56.514132 1280174 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.50s)
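Note on this failure mode: every "addons disable" case in this report (LocalPath above, NvidiaDevicePlugin and Yakd below, and the other TestAddons/parallel failures in the summary) dies at the same step. After listing the kube-system containers with crictl, minikube checks that the cluster is not paused by running "sudo runc list -f json" on the node; on this CRI-O configuration that command exits 1 with "open /run/runc: no such file or directory", which minikube surfaces as MK_ADDON_DISABLE_PAUSED. The sketch below is a minimal, assumed reconstruction of that paused-state probe (it is not minikube's actual implementation) and reproduces the failing step in isolation. The missing /run/runc state directory suggests CRI-O here is not driving containers through runc at its default state root (crun, for example, keeps its state elsewhere), though the log alone does not confirm which OCI runtime is configured.

	// pausedcheck.go: assumed sketch of the paused-state probe that fails above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer keeps only the fields we need from `runc list -f json`,
	// which prints a JSON array of container state objects.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		// The exact command from the log. On this node it exits 1 with
		// "open /run/runc: no such file or directory" because runc's
		// state root was never created.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list -f json: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			// minikube reports this condition as MK_ADDON_DISABLE_PAUSED.
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}

Running the same command by hand ("minikube -p addons-806706 ssh -- sudo runc list -f json") should fail identically while "sudo crictl ps" succeeds, which localizes the problem to the runc-based paused check rather than to the container runtime itself.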
TestAddons/parallel/NvidiaDevicePlugin (5.29s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-x2b9d" [96ac05b1-31d6-4e01-a2a7-5a341a5f6bf3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004941178s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (282.08203ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1002 21:09:47.787550 1279839 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:47.788253 1279839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:47.788267 1279839 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:47.788272 1279839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:47.788535 1279839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:09:47.788870 1279839 mustload.go:65] Loading cluster: addons-806706
	I1002 21:09:47.789226 1279839 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:47.789245 1279839 addons.go:606] checking whether the cluster is paused
	I1002 21:09:47.789346 1279839 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:47.789361 1279839 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:09:47.789858 1279839 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:09:47.809390 1279839 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:47.809467 1279839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:09:47.835073 1279839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:09:47.933028 1279839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:47.933110 1279839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:47.969968 1279839 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:09:47.969992 1279839 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:09:47.969997 1279839 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:09:47.970001 1279839 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:09:47.970009 1279839 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:09:47.970014 1279839 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:09:47.970016 1279839 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:09:47.970020 1279839 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:09:47.970022 1279839 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:09:47.970054 1279839 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:09:47.970061 1279839 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:09:47.970064 1279839 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:09:47.970067 1279839 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:09:47.970070 1279839 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:09:47.970073 1279839 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:09:47.970078 1279839 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:09:47.970085 1279839 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:09:47.970089 1279839 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:09:47.970092 1279839 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:09:47.970095 1279839 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:09:47.970100 1279839 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:09:47.970104 1279839 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:09:47.970107 1279839 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:09:47.970110 1279839 cri.go:89] found id: ""
	I1002 21:09:47.970160 1279839 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:09:47.985645 1279839 out.go:203] 
	W1002 21:09:47.988610 1279839 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:09:47.988642 1279839 out.go:285] * 
	W1002 21:09:47.997524 1279839 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:09:48.003418 1279839 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.29s)
TestAddons/parallel/Yakd (6.27s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ldmgf" [e903cee7-ffd2-45d6-8a4a-269c40027f4f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002874714s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-806706 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-806706 addons disable yakd --alsologtostderr -v=1: exit status 11 (265.189545ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1002 21:09:30.507492 1279345 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:30.508379 1279345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:30.508421 1279345 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:30.508445 1279345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:30.508827 1279345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:09:30.509580 1279345 mustload.go:65] Loading cluster: addons-806706
	I1002 21:09:30.510526 1279345 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:30.510793 1279345 addons.go:606] checking whether the cluster is paused
	I1002 21:09:30.510943 1279345 config.go:182] Loaded profile config "addons-806706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:30.510988 1279345 host.go:66] Checking if "addons-806706" exists ...
	I1002 21:09:30.511493 1279345 cli_runner.go:164] Run: docker container inspect addons-806706 --format={{.State.Status}}
	I1002 21:09:30.530347 1279345 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:30.530406 1279345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-806706
	I1002 21:09:30.548597 1279345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/addons-806706/id_rsa Username:docker}
	I1002 21:09:30.644735 1279345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:30.644812 1279345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:30.674777 1279345 cri.go:89] found id: "c49402c09d33e86eb9bf2c0f60afdd942f6c172a1b21beb2bc742f3c8216e673"
	I1002 21:09:30.674799 1279345 cri.go:89] found id: "c8b471a8c48404fadbc30a2928553b9b2074f1f4d34679a77bde7011048b4160"
	I1002 21:09:30.674805 1279345 cri.go:89] found id: "eac22bc4229e11c90373910077d832c76ffe8bb709da920bdba3b75bd30bca48"
	I1002 21:09:30.674809 1279345 cri.go:89] found id: "2c7540d82769e54a760e6d58463df1c48f5e7f1d12a3f70933dd92ccad94f0be"
	I1002 21:09:30.674812 1279345 cri.go:89] found id: "3da90336e0b30f2e2d011b5f68df49818e3fb7e94f41eedb5ba1ae0dafca2fd6"
	I1002 21:09:30.674816 1279345 cri.go:89] found id: "3ebf774e4e0f2653b187a12b88ec9321050a368a9a85d5459c59cd0391e63932"
	I1002 21:09:30.674819 1279345 cri.go:89] found id: "7f5b74ab76d021036229f75097623097535cc967f659718443200cc0618be75b"
	I1002 21:09:30.674822 1279345 cri.go:89] found id: "a9334e6a8404e664f47caaed889f2f9bd9fa0d1bbc713abf96b480790bc225d5"
	I1002 21:09:30.674825 1279345 cri.go:89] found id: "d663f5ea76e437fdfc1fa82a62d6150ed7b5f318f07d17589feaa24b493e7dca"
	I1002 21:09:30.674831 1279345 cri.go:89] found id: "6678f7b4945988b650a1763e110df23486fe53cb3a32240cc57d1933178fd6a9"
	I1002 21:09:30.674834 1279345 cri.go:89] found id: "9fdc4fbb9694bbc0f3869abe89b24fad8a4509861a7b0c6c1823ab3b1a9e24c7"
	I1002 21:09:30.674838 1279345 cri.go:89] found id: "d8a82613cbbf787cf54ac252960b137540f11452d2507ba740f25f4a1c46a556"
	I1002 21:09:30.674841 1279345 cri.go:89] found id: "a3db0f10ee2bd872f72ace61e632808cceb70cf2b25dcf9a2b842336a957fefb"
	I1002 21:09:30.674844 1279345 cri.go:89] found id: "7e58d89c526a1c5d8c4616d7e685dbcf9c00fa3d753659f4bcc95b2f6740622d"
	I1002 21:09:30.674848 1279345 cri.go:89] found id: "b5c710f4aca2845c88e1594434ce5da200db539419e41f4cec0acc6b58679222"
	I1002 21:09:30.674853 1279345 cri.go:89] found id: "96b63ced4cbec9bfd1c68810fe92044e6b3d506e95cae37e5bebf59cd277e1c1"
	I1002 21:09:30.674856 1279345 cri.go:89] found id: "e06a73003aabb7ab3fff71c0bb0c37950498bc051a75251a6c4c24461af927cd"
	I1002 21:09:30.674860 1279345 cri.go:89] found id: "c3c9833a8ac94a3905c2967449ea1affffc8188eb925aeca3a3fc71ad8f861f0"
	I1002 21:09:30.674864 1279345 cri.go:89] found id: "3ce054faf2e39fe793ffa65b0a59d07d97002a36f8c9af8fac742a98bd71eb33"
	I1002 21:09:30.674867 1279345 cri.go:89] found id: "d78cdb898250ba40c89eebf546a8c8d43fd2f5968c0d7d10f6fd1a24461764ee"
	I1002 21:09:30.674871 1279345 cri.go:89] found id: "edee49f1f2c302945ab5757a9edffc10cf0303b82c22f22747489b30497412bd"
	I1002 21:09:30.674875 1279345 cri.go:89] found id: "5899705446a85043d59caed9ec9a6e861a61992fdacc986da6e0b4a84261981d"
	I1002 21:09:30.674878 1279345 cri.go:89] found id: "e997bf38b55bf38416bbdac28f87a5b88510acf65813d017a74a785c7466f39c"
	I1002 21:09:30.674881 1279345 cri.go:89] found id: ""
	I1002 21:09:30.674932 1279345 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 21:09:30.695232 1279345 out.go:203] 
	W1002 21:09:30.699827 1279345 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:09:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 21:09:30.699852 1279345 out.go:285] * 
	W1002 21:09:30.708404 1279345 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:09:30.712504 1279345 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-806706 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
TestForceSystemdFlag (510.6s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-292135 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-292135 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m26.9512298s)
-- stdout --
	* [force-systemd-flag-292135] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-292135" primary control-plane node in "force-systemd-flag-292135" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
-- /stdout --
** stderr ** 
	I1002 22:02:26.581047 1433697 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:02:26.581230 1433697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:02:26.581239 1433697 out.go:374] Setting ErrFile to fd 2...
	I1002 22:02:26.581244 1433697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:02:26.581515 1433697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:02:26.581923 1433697 out.go:368] Setting JSON to false
	I1002 22:02:26.582897 1433697 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24272,"bootTime":1759418275,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:02:26.582970 1433697 start.go:140] virtualization:  
	I1002 22:02:26.588330 1433697 out.go:179] * [force-systemd-flag-292135] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:02:26.593326 1433697 notify.go:220] Checking for updates...
	I1002 22:02:26.593920 1433697 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:02:26.597925 1433697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:02:26.601894 1433697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:02:26.605336 1433697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:02:26.608610 1433697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:02:26.611840 1433697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:02:26.615665 1433697 config.go:182] Loaded profile config "kubernetes-upgrade-186867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:26.615844 1433697 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:02:26.640801 1433697 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:02:26.640935 1433697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:02:26.750970 1433697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:02:26.737958579 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:02:26.751080 1433697 docker.go:318] overlay module found
	I1002 22:02:26.754401 1433697 out.go:179] * Using the docker driver based on user configuration
	I1002 22:02:26.757304 1433697 start.go:304] selected driver: docker
	I1002 22:02:26.757320 1433697 start.go:924] validating driver "docker" against <nil>
	I1002 22:02:26.757334 1433697 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:02:26.758087 1433697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:02:26.852649 1433697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:02:26.843556879 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:02:26.852797 1433697 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:02:26.853011 1433697 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 22:02:26.856252 1433697 out.go:179] * Using Docker driver with root privileges
	I1002 22:02:26.859302 1433697 cni.go:84] Creating CNI manager for ""
	I1002 22:02:26.859375 1433697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:02:26.859385 1433697 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:02:26.859461 1433697 start.go:348] cluster config:
	{Name:force-systemd-flag-292135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-292135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:02:26.864544 1433697 out.go:179] * Starting "force-systemd-flag-292135" primary control-plane node in "force-systemd-flag-292135" cluster
	I1002 22:02:26.867404 1433697 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:02:26.870336 1433697 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:02:26.873151 1433697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:02:26.873204 1433697 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:02:26.873218 1433697 cache.go:58] Caching tarball of preloaded images
	I1002 22:02:26.873304 1433697 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:02:26.873314 1433697 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:02:26.873415 1433697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/config.json ...
	I1002 22:02:26.873432 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/config.json: {Name:mk8af3bb93cae64445de3a210d5f1128f5a8f885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:26.873586 1433697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:02:26.900773 1433697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:02:26.900793 1433697 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:02:26.900806 1433697 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:02:26.900829 1433697 start.go:360] acquireMachinesLock for force-systemd-flag-292135: {Name:mk4427119d7f4fbb46f19d867515229df994dc22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:02:26.900932 1433697 start.go:364] duration metric: took 87.383µs to acquireMachinesLock for "force-systemd-flag-292135"
	I1002 22:02:26.900958 1433697 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-292135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-292135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:02:26.901026 1433697 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:02:26.904519 1433697 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:02:26.904780 1433697 start.go:159] libmachine.API.Create for "force-systemd-flag-292135" (driver="docker")
	I1002 22:02:26.904830 1433697 client.go:168] LocalClient.Create starting
	I1002 22:02:26.904904 1433697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:02:26.904936 1433697 main.go:141] libmachine: Decoding PEM data...
	I1002 22:02:26.904953 1433697 main.go:141] libmachine: Parsing certificate...
	I1002 22:02:26.905005 1433697 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:02:26.905024 1433697 main.go:141] libmachine: Decoding PEM data...
	I1002 22:02:26.905034 1433697 main.go:141] libmachine: Parsing certificate...
	I1002 22:02:26.905393 1433697 cli_runner.go:164] Run: docker network inspect force-systemd-flag-292135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:02:26.921698 1433697 cli_runner.go:211] docker network inspect force-systemd-flag-292135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:02:26.921810 1433697 network_create.go:284] running [docker network inspect force-systemd-flag-292135] to gather additional debugging logs...
	I1002 22:02:26.921827 1433697 cli_runner.go:164] Run: docker network inspect force-systemd-flag-292135
	W1002 22:02:26.943995 1433697 cli_runner.go:211] docker network inspect force-systemd-flag-292135 returned with exit code 1
	I1002 22:02:26.944026 1433697 network_create.go:287] error running [docker network inspect force-systemd-flag-292135]: docker network inspect force-systemd-flag-292135: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-292135 not found
	I1002 22:02:26.944047 1433697 network_create.go:289] output of [docker network inspect force-systemd-flag-292135]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-292135 not found
	
	** /stderr **
	I1002 22:02:26.944146 1433697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:02:26.979440 1433697 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:02:26.979779 1433697 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:02:26.980141 1433697 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:02:26.980535 1433697 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193dad0}
	I1002 22:02:26.980554 1433697 network_create.go:124] attempt to create docker network force-systemd-flag-292135 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 22:02:26.980614 1433697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-292135 force-systemd-flag-292135
	I1002 22:02:27.058102 1433697 network_create.go:108] docker network force-systemd-flag-292135 192.168.76.0/24 created
	I1002 22:02:27.058133 1433697 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-292135" container
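	The "skipping subnet ... that is taken" lines and the "using free private subnet 192.168.76.0/24" line above show how this network was chosen before the create call: candidate /24 blocks are probed in order (192.168.49.0, 192.168.58.0, 192.168.67.0, then 192.168.76.0) and the first one with no existing bridge wins. Below is a compact sketch of that probing loop; the starting octet and the step of 9 are inferred from this log, not taken from minikube's source.

	// subnetprobe.go: assumed sketch of the free-subnet scan visible above.
	package main

	import (
		"fmt"
		"net"
	)

	// taken mirrors what the log reports: three /24s already claimed by
	// other minikube networks. In reality this set comes from inspecting
	// the existing docker bridges.
	var taken = map[string]bool{
		"192.168.49.0": true,
		"192.168.58.0": true,
		"192.168.67.0": true,
	}

	// freeSubnet walks candidate private /24s, stepping the third octet
	// by 9 (49 -> 58 -> 67 -> 76 ...), and returns the first free one.
	func freeSubnet() (*net.IPNet, error) {
		for octet := 49; octet < 256; octet += 9 {
			base := fmt.Sprintf("192.168.%d.0", octet)
			if taken[base] {
				continue
			}
			_, subnet, err := net.ParseCIDR(base + "/24")
			if err != nil {
				return nil, err
			}
			return subnet, nil
		}
		return nil, fmt.Errorf("no free /24 in 192.168.0.0/16")
	}

	func main() {
		subnet, err := freeSubnet()
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 here
	}

	With 192.168.76.0/24 selected, the gateway is pinned to 192.168.76.1 and the node gets the static address 192.168.76.2, which is exactly what the docker network create invocation above encodes.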
	I1002 22:02:27.058220 1433697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:02:27.092978 1433697 cli_runner.go:164] Run: docker volume create force-systemd-flag-292135 --label name.minikube.sigs.k8s.io=force-systemd-flag-292135 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:02:27.118555 1433697 oci.go:103] Successfully created a docker volume force-systemd-flag-292135
	I1002 22:02:27.118653 1433697 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-292135-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-292135 --entrypoint /usr/bin/test -v force-systemd-flag-292135:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:02:27.658663 1433697 oci.go:107] Successfully prepared a docker volume force-systemd-flag-292135
	I1002 22:02:27.658716 1433697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:02:27.658736 1433697 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:02:27.658819 1433697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-292135:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 22:02:32.066925 1433697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-292135:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.408062321s)
	I1002 22:02:32.066954 1433697 kic.go:203] duration metric: took 4.408214245s to extract preloaded images to volume ...
	W1002 22:02:32.067096 1433697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:02:32.067196 1433697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:02:32.155210 1433697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-292135 --name force-systemd-flag-292135 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-292135 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-292135 --network force-systemd-flag-292135 --ip 192.168.76.2 --volume force-systemd-flag-292135:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:02:32.582954 1433697 cli_runner.go:164] Run: docker container inspect force-systemd-flag-292135 --format={{.State.Running}}
	I1002 22:02:32.619449 1433697 cli_runner.go:164] Run: docker container inspect force-systemd-flag-292135 --format={{.State.Status}}
	I1002 22:02:32.650891 1433697 cli_runner.go:164] Run: docker exec force-systemd-flag-292135 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:02:32.717623 1433697 oci.go:144] the created container "force-systemd-flag-292135" has a running status.
	I1002 22:02:32.717665 1433697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa...
	I1002 22:02:33.711224 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 22:02:33.711273 1433697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:02:33.742000 1433697 cli_runner.go:164] Run: docker container inspect force-systemd-flag-292135 --format={{.State.Status}}
	I1002 22:02:33.763851 1433697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:02:33.763875 1433697 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-292135 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:02:33.808242 1433697 cli_runner.go:164] Run: docker container inspect force-systemd-flag-292135 --format={{.State.Status}}
	I1002 22:02:33.827100 1433697 machine.go:93] provisionDockerMachine start ...
	I1002 22:02:33.827200 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:33.849704 1433697 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:33.850073 1433697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34531 <nil> <nil>}
	I1002 22:02:33.850093 1433697 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:02:33.997805 1433697 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-292135
	
	I1002 22:02:33.997830 1433697 ubuntu.go:182] provisioning hostname "force-systemd-flag-292135"
	I1002 22:02:33.997907 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:34.024211 1433697 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:34.024530 1433697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34531 <nil> <nil>}
	I1002 22:02:34.024547 1433697 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-292135 && echo "force-systemd-flag-292135" | sudo tee /etc/hostname
	I1002 22:02:34.179970 1433697 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-292135
	
	I1002 22:02:34.180067 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:34.200189 1433697 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:34.200518 1433697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34531 <nil> <nil>}
	I1002 22:02:34.200544 1433697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-292135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-292135/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-292135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:02:34.334662 1433697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:02:34.334688 1433697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:02:34.334708 1433697 ubuntu.go:190] setting up certificates
	I1002 22:02:34.334718 1433697 provision.go:84] configureAuth start
	I1002 22:02:34.334794 1433697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-292135
	I1002 22:02:34.354439 1433697 provision.go:143] copyHostCerts
	I1002 22:02:34.354494 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:02:34.354527 1433697 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:02:34.354538 1433697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:02:34.354619 1433697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:02:34.354713 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:02:34.354741 1433697 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:02:34.354746 1433697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:02:34.354773 1433697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:02:34.354831 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:02:34.354850 1433697 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:02:34.354855 1433697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:02:34.354890 1433697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:02:34.354955 1433697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-292135 san=[127.0.0.1 192.168.76.2 force-systemd-flag-292135 localhost minikube]
	I1002 22:02:34.874813 1433697 provision.go:177] copyRemoteCerts
	I1002 22:02:34.874889 1433697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:02:34.874934 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:34.895851 1433697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34531 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa Username:docker}
	I1002 22:02:34.994044 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 22:02:34.994105 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:02:35.019314 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 22:02:35.019419 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:02:35.037867 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 22:02:35.037997 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:02:35.057311 1433697 provision.go:87] duration metric: took 722.568645ms to configureAuth
	I1002 22:02:35.057341 1433697 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:02:35.057536 1433697 config.go:182] Loaded profile config "force-systemd-flag-292135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:35.057648 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:35.075332 1433697 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:35.075638 1433697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34531 <nil> <nil>}
	I1002 22:02:35.075656 1433697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:02:35.320328 1433697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:02:35.320404 1433697 machine.go:96] duration metric: took 1.493283533s to provisionDockerMachine
	I1002 22:02:35.320440 1433697 client.go:171] duration metric: took 8.415602598s to LocalClient.Create
	I1002 22:02:35.320491 1433697 start.go:167] duration metric: took 8.415711839s to libmachine.API.Create "force-systemd-flag-292135"
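A quick way to confirm the provisioning step above took effect — a minimal sketch, assuming the kic node container is reachable with docker exec under the name shown in the log:

	# Inspect the drop-in the SSH command just wrote:
	docker exec force-systemd-flag-292135 cat /etc/sysconfig/crio.minikube
	# Expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# Confirm CRI-O survived the systemctl restart:
	docker exec force-systemd-flag-292135 systemctl is-active crio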
	I1002 22:02:35.320519 1433697 start.go:293] postStartSetup for "force-systemd-flag-292135" (driver="docker")
	I1002 22:02:35.320556 1433697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:02:35.320651 1433697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:02:35.320723 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:35.339772 1433697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34531 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa Username:docker}
	I1002 22:02:35.442261 1433697 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:02:35.445865 1433697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:02:35.445895 1433697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:02:35.445906 1433697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:02:35.445963 1433697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:02:35.446075 1433697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:02:35.446087 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /etc/ssl/certs/12725142.pem
	I1002 22:02:35.446189 1433697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:02:35.453634 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:35.471575 1433697 start.go:296] duration metric: took 151.027319ms for postStartSetup
	I1002 22:02:35.471955 1433697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-292135
	I1002 22:02:35.489559 1433697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/config.json ...
	I1002 22:02:35.489856 1433697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:02:35.489908 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:35.507798 1433697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34531 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa Username:docker}
	I1002 22:02:35.603069 1433697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:02:35.607585 1433697 start.go:128] duration metric: took 8.706545131s to createHost
	I1002 22:02:35.607611 1433697 start.go:83] releasing machines lock for "force-systemd-flag-292135", held for 8.706670504s
	I1002 22:02:35.607681 1433697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-292135
	I1002 22:02:35.625388 1433697 ssh_runner.go:195] Run: cat /version.json
	I1002 22:02:35.625440 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:35.625447 1433697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:02:35.625500 1433697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-292135
	I1002 22:02:35.645538 1433697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34531 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa Username:docker}
	I1002 22:02:35.648800 1433697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34531 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-flag-292135/id_rsa Username:docker}
	I1002 22:02:35.738081 1433697 ssh_runner.go:195] Run: systemctl --version
	I1002 22:02:35.831804 1433697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:02:35.874954 1433697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:02:35.879485 1433697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:02:35.879584 1433697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:02:35.909883 1433697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
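The .mk_disabled rename above is how minikube sidelines competing bridge/podman CNI configs, and it is reversible. A sketch of undoing it by hand, run inside the node (the filenames are whatever the find above matched):

	# Restore any CNI config that was sidelined:
	sudo sh -c 'for f in /etc/cni/net.d/*.mk_disabled; do mv "$f" "${f%.mk_disabled}"; done'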
	I1002 22:02:35.909930 1433697 start.go:495] detecting cgroup driver to use...
	I1002 22:02:35.909944 1433697 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 22:02:35.910014 1433697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:02:35.927814 1433697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:02:35.941032 1433697 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:02:35.941144 1433697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:02:35.960491 1433697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:02:35.979792 1433697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:02:36.105489 1433697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:02:36.220843 1433697 docker.go:234] disabling docker service ...
	I1002 22:02:36.220956 1433697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:02:36.242713 1433697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:02:36.255932 1433697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:02:36.371719 1433697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:02:36.482388 1433697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:02:36.496626 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:02:36.510507 1433697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:02:36.510610 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.520244 1433697 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 22:02:36.520313 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.529896 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.539960 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.548786 1433697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:02:36.556797 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.565631 1433697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.579378 1433697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:36.588488 1433697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:02:36.595792 1433697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:02:36.603038 1433697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:36.715943 1433697 ssh_runner.go:195] Run: sudo systemctl restart crio
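The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; a spot-check that they landed (a sketch, run inside the node):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, given the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",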
	I1002 22:02:36.857932 1433697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:02:36.858139 1433697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:02:36.862157 1433697 start.go:563] Will wait 60s for crictl version
	I1002 22:02:36.862270 1433697 ssh_runner.go:195] Run: which crictl
	I1002 22:02:36.866380 1433697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:02:36.892711 1433697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:02:36.892883 1433697 ssh_runner.go:195] Run: crio --version
	I1002 22:02:36.924492 1433697 ssh_runner.go:195] Run: crio --version
	I1002 22:02:36.957591 1433697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:02:36.960455 1433697 cli_runner.go:164] Run: docker network inspect force-systemd-flag-292135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:02:36.976841 1433697 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:02:36.980688 1433697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:02:36.990684 1433697 kubeadm.go:883] updating cluster {Name:force-systemd-flag-292135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-292135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:02:36.990809 1433697 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:02:36.990875 1433697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:37.035359 1433697 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:37.035386 1433697 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:02:37.035456 1433697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:37.063309 1433697 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:37.063331 1433697 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:02:37.063339 1433697 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:02:37.063433 1433697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-292135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-292135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
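The empty ExecStart= line in the [Service] block above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the stock kubelet unit so the override on the next line replaces it rather than appending. To see the merged result on the node (a sketch):

	# Show the stock unit plus the 10-kubeadm.conf drop-in as systemd merges them:
	sudo systemctl cat kubelet
	# Pick up edits to either file:
	sudo systemctl daemon-reload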
	I1002 22:02:37.063520 1433697 ssh_runner.go:195] Run: crio config
	I1002 22:02:37.134099 1433697 cni.go:84] Creating CNI manager for ""
	I1002 22:02:37.134172 1433697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:02:37.134203 1433697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:02:37.134255 1433697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-292135 NodeName:force-systemd-flag-292135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:02:37.134441 1433697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-292135"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:02:37.134557 1433697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:02:37.142455 1433697 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:02:37.142563 1433697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:02:37.150301 1433697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1002 22:02:37.163169 1433697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:02:37.176986 1433697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
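Before handing kubeadm.yaml.new to kubeadm init, it can be sanity-checked offline; a sketch using the binaries path from the log (recent kubeadm releases include the `config validate` subcommand):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# Or walk all init phases without mutating the host:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run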
	I1002 22:02:37.191177 1433697 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:02:37.195008 1433697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:02:37.205000 1433697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:37.326242 1433697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:02:37.347064 1433697 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135 for IP: 192.168.76.2
	I1002 22:02:37.347089 1433697 certs.go:195] generating shared ca certs ...
	I1002 22:02:37.347105 1433697 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:37.347234 1433697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:02:37.347297 1433697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:02:37.347309 1433697 certs.go:257] generating profile certs ...
	I1002 22:02:37.347369 1433697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.key
	I1002 22:02:37.347384 1433697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.crt with IP's: []
	I1002 22:02:37.571390 1433697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.crt ...
	I1002 22:02:37.571420 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.crt: {Name:mkf92c59999245ea9cff1f944ef5bad0400bd4b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:37.571625 1433697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.key ...
	I1002 22:02:37.571641 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/client.key: {Name:mk2bff43730970acf8d627cbb4d9a297b0ff3439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:37.571741 1433697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key.343aa61c
	I1002 22:02:37.571760 1433697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt.343aa61c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:02:38.081371 1433697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt.343aa61c ...
	I1002 22:02:38.081410 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt.343aa61c: {Name:mk8c12be0e413855cc0d59f07fdb9cedff1b2f6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:38.081630 1433697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key.343aa61c ...
	I1002 22:02:38.081649 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key.343aa61c: {Name:mk3755e9f8b6f7ce3340ae1b194a9ea60003d80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:38.081744 1433697 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt.343aa61c -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt
	I1002 22:02:38.081826 1433697 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key.343aa61c -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key
	I1002 22:02:38.081888 1433697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.key
	I1002 22:02:38.081907 1433697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.crt with IP's: []
	I1002 22:02:38.342736 1433697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.crt ...
	I1002 22:02:38.342767 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.crt: {Name:mk60db2d02dd02a848511f425751618c351d9997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:38.342995 1433697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.key ...
	I1002 22:02:38.343012 1433697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.key: {Name:mk43f298569a24ff71f494a56be83cfec18ee75e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
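minikube generates these profile certs in Go (the crypto.go calls above); the equivalent flow with plain openssl looks roughly like the sketch below. The filenames are illustrative, and the subject mirrors the O=system:masters / CN=minikube-user identity minikube's client cert conventionally carries:

	# Hypothetical recreation of a CA-signed client cert with openssl:
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 365 -out client.crt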
	I1002 22:02:38.343111 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 22:02:38.343135 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 22:02:38.343152 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 22:02:38.343170 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 22:02:38.343188 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 22:02:38.343202 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 22:02:38.343226 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 22:02:38.343241 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 22:02:38.343294 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:02:38.343332 1433697 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:02:38.343346 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:02:38.343369 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:02:38.343396 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:02:38.343423 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:02:38.343472 1433697 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:38.343512 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /usr/share/ca-certificates/12725142.pem
	I1002 22:02:38.343529 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:38.343544 1433697 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem -> /usr/share/ca-certificates/1272514.pem
	I1002 22:02:38.344089 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:02:38.369942 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:02:38.391826 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:02:38.412684 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:02:38.435797 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 22:02:38.458499 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:02:38.485217 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:02:38.509853 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-flag-292135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:02:38.530728 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:02:38.553478 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:02:38.576946 1433697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:02:38.606579 1433697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:02:38.630135 1433697 ssh_runner.go:195] Run: openssl version
	I1002 22:02:38.638023 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:02:38.647722 1433697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:02:38.651966 1433697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:02:38.652028 1433697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:02:38.694348 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:02:38.706259 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:02:38.716661 1433697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:02:38.721132 1433697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:02:38.721206 1433697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:02:38.767857 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:02:38.777044 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:02:38.786982 1433697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:38.791381 1433697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:38.791451 1433697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:38.833137 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
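The openssl x509 -hash runs above compute the subject hash that names the /etc/ssl/certs symlinks (b5213941.0 for minikubeCA, per the ln -fs just executed). The relationship, as a sketch run inside the node:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                    # b5213941, matching the link name above
	ls -l "/etc/ssl/certs/$h.0"  # the symlink created by the ln -fs above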
	I1002 22:02:38.841690 1433697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:02:38.845734 1433697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:02:38.845793 1433697 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-292135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-292135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:02:38.845882 1433697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:02:38.845953 1433697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:02:38.878359 1433697 cri.go:89] found id: ""
	I1002 22:02:38.878448 1433697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:02:38.886570 1433697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:02:38.894658 1433697 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:02:38.894723 1433697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:02:38.906198 1433697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:02:38.906217 1433697 kubeadm.go:157] found existing configuration files:
	
	I1002 22:02:38.906270 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:02:38.918639 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:02:38.918747 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:02:38.926445 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:02:38.936864 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:02:38.936974 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:02:38.944689 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:02:38.953657 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:02:38.953774 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:02:38.961514 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:02:38.970583 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:02:38.970706 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:02:38.978350 1433697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:02:39.023813 1433697 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:02:39.024085 1433697 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:02:39.059419 1433697 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:02:39.059570 1433697 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:02:39.059622 1433697 kubeadm.go:318] OS: Linux
	I1002 22:02:39.059700 1433697 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:02:39.059844 1433697 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:02:39.059964 1433697 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:02:39.060047 1433697 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:02:39.060166 1433697 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:02:39.060778 1433697 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:02:39.060860 1433697 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:02:39.060923 1433697 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:02:39.060987 1433697 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:02:39.132815 1433697 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:02:39.133034 1433697 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:02:39.133142 1433697 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:02:39.141692 1433697 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:02:39.148238 1433697 out.go:252]   - Generating certificates and keys ...
	I1002 22:02:39.148409 1433697 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:02:39.148509 1433697 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:02:39.764338 1433697 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:02:41.290061 1433697 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:02:41.865394 1433697 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:02:42.264651 1433697 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:02:42.353533 1433697 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:02:42.354084 1433697 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:02:43.161783 1433697 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:02:43.162174 1433697 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:02:43.418166 1433697 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:02:43.893467 1433697 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:02:44.964855 1433697 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:02:44.965126 1433697 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:02:46.298294 1433697 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:02:46.478238 1433697 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:02:46.924187 1433697 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:02:47.161512 1433697 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:02:47.285896 1433697 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:02:47.286765 1433697 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:02:47.289550 1433697 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:02:47.293147 1433697 out.go:252]   - Booting up control plane ...
	I1002 22:02:47.293274 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:02:47.293379 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:02:47.293466 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:02:47.309397 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:02:47.309514 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:02:47.317877 1433697 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:02:47.318310 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:02:47.318547 1433697 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:02:47.464205 1433697 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:02:47.464337 1433697 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:02:48.965767 1433697 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501701778s
	I1002 22:02:48.971622 1433697 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:02:48.971729 1433697 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:02:48.971823 1433697 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:02:48.971904 1433697 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:06:48.970051 1433697 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000161419s
	I1002 22:06:48.970975 1433697 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00060891s
	I1002 22:06:48.971410 1433697 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00121086s
	I1002 22:06:48.971435 1433697 kubeadm.go:318] 
	I1002 22:06:48.971533 1433697 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:06:48.971624 1433697 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:06:48.971725 1433697 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:06:48.972084 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:06:48.972173 1433697 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:06:48.972256 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:06:48.972261 1433697 kubeadm.go:318] 
	I1002 22:06:48.976310 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:06:48.976550 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:06:48.976667 1433697 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:06:48.977294 1433697 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:06:48.977370 1433697 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 22:06:48.977499 1433697 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501701778s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000161419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00060891s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00121086s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 22:06:48.977584 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 22:06:49.514090 1433697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:06:49.527101 1433697 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:06:49.527168 1433697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:06:49.536813 1433697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:06:49.536837 1433697 kubeadm.go:157] found existing configuration files:
	
	I1002 22:06:49.536889 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:06:49.545354 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:06:49.545510 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:06:49.553150 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:06:49.560864 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:06:49.560926 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:06:49.573071 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:06:49.580833 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:06:49.580900 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:06:49.588277 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:06:49.595558 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:06:49.595668 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:06:49.603171 1433697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:06:49.670321 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:06:49.670593 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:06:49.740522 1433697 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:10:52.920190 1433697 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 22:10:52.920358 1433697 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 22:10:52.923108 1433697 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:10:52.923165 1433697 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:10:52.923267 1433697 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:10:52.923324 1433697 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:10:52.923367 1433697 kubeadm.go:318] OS: Linux
	I1002 22:10:52.923443 1433697 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:10:52.923506 1433697 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:10:52.923673 1433697 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:10:52.923736 1433697 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:10:52.923793 1433697 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:10:52.923850 1433697 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:10:52.923913 1433697 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:10:52.923971 1433697 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:10:52.924049 1433697 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:10:52.924142 1433697 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:10:52.924273 1433697 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:10:52.924377 1433697 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:10:52.924450 1433697 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:10:52.930904 1433697 out.go:252]   - Generating certificates and keys ...
	I1002 22:10:52.931006 1433697 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:10:52.931082 1433697 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:10:52.931162 1433697 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 22:10:52.931230 1433697 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 22:10:52.931309 1433697 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 22:10:52.931365 1433697 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 22:10:52.931432 1433697 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 22:10:52.931496 1433697 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 22:10:52.931576 1433697 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 22:10:52.931653 1433697 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 22:10:52.931695 1433697 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 22:10:52.931753 1433697 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:10:52.931807 1433697 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:10:52.931869 1433697 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:10:52.931925 1433697 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:10:52.931991 1433697 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:10:52.932049 1433697 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:10:52.932135 1433697 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:10:52.932204 1433697 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:10:52.935194 1433697 out.go:252]   - Booting up control plane ...
	I1002 22:10:52.935325 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:10:52.935423 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:10:52.935499 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:10:52.935611 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:10:52.935710 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:10:52.935821 1433697 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:10:52.935911 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:10:52.935956 1433697 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:10:52.936095 1433697 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:10:52.936205 1433697 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:10:52.936268 1433697 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50152796s
	I1002 22:10:52.936367 1433697 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:10:52.936454 1433697 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:10:52.936550 1433697 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:10:52.936636 1433697 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:10:52.936718 1433697 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	I1002 22:10:52.936804 1433697 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	I1002 22:10:52.936887 1433697 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	I1002 22:10:52.936897 1433697 kubeadm.go:318] 
	I1002 22:10:52.936991 1433697 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:10:52.937079 1433697 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:10:52.937179 1433697 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:10:52.937280 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:10:52.937360 1433697 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:10:52.937450 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:10:52.937516 1433697 kubeadm.go:402] duration metric: took 8m14.091729136s to StartCluster
	I1002 22:10:52.937554 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:10:52.937620 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:10:52.937711 1433697 kubeadm.go:318] 
	I1002 22:10:52.970595 1433697 cri.go:89] found id: ""
	I1002 22:10:52.970634 1433697 logs.go:282] 0 containers: []
	W1002 22:10:52.970643 1433697 logs.go:284] No container was found matching "kube-apiserver"
	I1002 22:10:52.970650 1433697 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:10:52.970715 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:10:53.010102 1433697 cri.go:89] found id: ""
	I1002 22:10:53.010127 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.010137 1433697 logs.go:284] No container was found matching "etcd"
	I1002 22:10:53.010145 1433697 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:10:53.010211 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:10:53.038224 1433697 cri.go:89] found id: ""
	I1002 22:10:53.038249 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.038258 1433697 logs.go:284] No container was found matching "coredns"
	I1002 22:10:53.038265 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:10:53.038330 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:10:53.069946 1433697 cri.go:89] found id: ""
	I1002 22:10:53.069973 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.069982 1433697 logs.go:284] No container was found matching "kube-scheduler"
	I1002 22:10:53.069989 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:10:53.070076 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:10:53.097391 1433697 cri.go:89] found id: ""
	I1002 22:10:53.097418 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.097428 1433697 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:10:53.097435 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:10:53.097495 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:10:53.123533 1433697 cri.go:89] found id: ""
	I1002 22:10:53.123559 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.123567 1433697 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 22:10:53.123575 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:10:53.123638 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:10:53.149708 1433697 cri.go:89] found id: ""
	I1002 22:10:53.149732 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.149741 1433697 logs.go:284] No container was found matching "kindnet"
	I1002 22:10:53.149750 1433697 logs.go:123] Gathering logs for dmesg ...
	I1002 22:10:53.149761 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:10:53.166745 1433697 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:10:53.166776 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:10:53.238278 1433697 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:10:53.229939    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.230757    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232237    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232696    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.234220    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 22:10:53.229939    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.230757    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232237    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232696    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.234220    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:10:53.238300 1433697 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:10:53.238311 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:10:53.313289 1433697 logs.go:123] Gathering logs for container status ...
	I1002 22:10:53.313329 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:10:53.345040 1433697 logs.go:123] Gathering logs for kubelet ...
	I1002 22:10:53.345075 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 22:10:53.436481 1433697 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 22:10:53.436555 1433697 out.go:285] * 
	W1002 22:10:53.436665 1433697 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:10:53.436691 1433697 out.go:285] * 
	W1002 22:10:53.438957 1433697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:10:53.445386 1433697 out.go:203] 
	W1002 22:10:53.449177 1433697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:10:53.449206 1433697 out.go:285] * 
	I1002 22:10:53.452238 1433697 out.go:203] 

** /stderr **
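The kubeadm output captured above keeps pointing at the same triage path: list the control-plane containers with crictl, then read the logs of whichever one is crashing. As a minimal sketch of that loop on the node itself (assuming shell access, e.g. `minikube ssh -p force-systemd-flag-292135`, and the CRI-O socket path printed in the log):

	# List all Kubernetes containers, including exited ones (verbatim from the kubeadm hint)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect a failing container by the ID in the first column (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# In this run no kube-* containers were found at all (the 'found id: ""' lines above),
	# so the kubelet and CRI-O journals that minikube gathered are the next stop:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400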
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-292135 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-292135 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
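For context, `/etc/crio/crio.conf.d/02-crio.conf` is the drop-in minikube writes so that CRI-O uses the systemd cgroup manager when `--force-systemd` is set, which is what this test verifies. The exact contents depend on the minikube version; an illustrative TOML sketch (assumed, not captured from this run):

	[crio.runtime]
	# match the kubelet's cgroup driver; this is what --force-systemd enforces
	cgroup_manager = "systemd"
	# with the systemd manager, conmon must run in the pod cgroup or a systemd slice
	conmon_cgroup = "pod"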
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-02 22:10:53.791389678 +0000 UTC m=+3919.039848866
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-292135
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-292135:

-- stdout --
	[
	    {
	        "Id": "09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1",
	        "Created": "2025-10-02T22:02:32.171631341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1434323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:02:32.241938041Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1/hosts",
	        "LogPath": "/var/lib/docker/containers/09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1/09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1-json.log",
	        "Name": "/force-systemd-flag-292135",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-292135:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-292135",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09d4e0caa31e339174b7fdceb6cfea3924f894ddea63c770d40a47813423e4b1",
	                "LowerDir": "/var/lib/docker/overlay2/86a562683dab19d1ef8ee6e1cb12048f487a3ecd286ba168b91b551cd824a7e0-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86a562683dab19d1ef8ee6e1cb12048f487a3ecd286ba168b91b551cd824a7e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86a562683dab19d1ef8ee6e1cb12048f487a3ecd286ba168b91b551cd824a7e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86a562683dab19d1ef8ee6e1cb12048f487a3ecd286ba168b91b551cd824a7e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-292135",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-292135/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-292135",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-292135",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-292135",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d3e4177b06ffad9f561a1eb6c502a4fc66c768e973f3ffcb73540a099ea807c",
	            "SandboxKey": "/var/run/docker/netns/7d3e4177b06f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34531"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-292135": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:a8:94:73:e4:3d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "66bd0105a658a085bcfca478c1ae1886a82b3f21888e99ab2f0f7cd2b84ec3e0",
	                    "EndpointID": "feb70955a4997b419fbc032f64da18379bb46f374f80a5e28ff8a2d341426b71",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-292135",
	                        "09d4e0caa31e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
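Rather than scanning the full JSON above, the handful of fields the post-mortem actually cares about can be pulled with Go templates, in the same `docker inspect -f` style that appears later in this log; a sketch, with the profile name from this test:

	docker inspect -f '{{.State.Status}}' force-systemd-flag-292135
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-292135
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' force-systemd-flag-292135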
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-292135 -n force-systemd-flag-292135
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-292135 -n force-systemd-flag-292135: exit status 6 (309.286998ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 22:10:54.102966 1444178 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-292135" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
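The warning in the status output above names the fix itself; with the binary and profile from this run that would be:

	out/minikube-linux-arm64 update-context -p force-systemd-flag-292135

Though since the stderr shows the profile missing from the kubeconfig entirely rather than merely stale, this matters mostly for the post-mortem commands that follow, not for recovering the cluster.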
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-292135 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo systemctl cat kubelet --no-pager                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status docker --all --full --no-pager                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat docker --no-pager                                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/docker/daemon.json                                                          │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo docker system info                                                                   │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cri-dockerd --version                                                                │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat containerd --no-pager                                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/containerd/config.toml                                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo containerd config dump                                                               │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status crio --all --full --no-pager                                        │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat crio --no-pager                                                        │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo crio config                                                                          │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-915858  │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-292135 │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:04:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:04:32.430765 1440430 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:04:32.430906 1440430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:32.430943 1440430 out.go:374] Setting ErrFile to fd 2...
	I1002 22:04:32.430975 1440430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:32.431417 1440430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:04:32.431965 1440430 out.go:368] Setting JSON to false
	I1002 22:04:32.433008 1440430 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24398,"bootTime":1759418275,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:04:32.433128 1440430 start.go:140] virtualization:  
	I1002 22:04:32.436686 1440430 out.go:179] * [force-systemd-env-915858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:04:32.438356 1440430 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:04:32.438439 1440430 notify.go:220] Checking for updates...
	I1002 22:04:32.441507 1440430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:04:32.443075 1440430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:04:32.444306 1440430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:04:32.445650 1440430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:04:32.447102 1440430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 22:04:32.449338 1440430 config.go:182] Loaded profile config "force-systemd-flag-292135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:04:32.449508 1440430 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:04:32.471976 1440430 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:04:32.472116 1440430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:04:32.534405 1440430 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:04:32.524995982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:04:32.534516 1440430 docker.go:318] overlay module found
	I1002 22:04:32.536689 1440430 out.go:179] * Using the docker driver based on user configuration
	I1002 22:04:32.538601 1440430 start.go:304] selected driver: docker
	I1002 22:04:32.538620 1440430 start.go:924] validating driver "docker" against <nil>
	I1002 22:04:32.538641 1440430 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:04:32.539457 1440430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:04:32.603430 1440430 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:04:32.594571914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:04:32.603592 1440430 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:04:32.603814 1440430 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 22:04:32.606007 1440430 out.go:179] * Using Docker driver with root privileges
	I1002 22:04:32.608192 1440430 cni.go:84] Creating CNI manager for ""
	I1002 22:04:32.608268 1440430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:04:32.608281 1440430 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:04:32.608376 1440430 start.go:348] cluster config:
	{Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:04:32.611011 1440430 out.go:179] * Starting "force-systemd-env-915858" primary control-plane node in "force-systemd-env-915858" cluster
	I1002 22:04:32.613424 1440430 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:04:32.616139 1440430 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:04:32.618500 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:32.618552 1440430 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:04:32.618564 1440430 cache.go:58] Caching tarball of preloaded images
	I1002 22:04:32.618598 1440430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:04:32.618659 1440430 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:04:32.618670 1440430 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:04:32.618785 1440430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json ...
	I1002 22:04:32.618806 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json: {Name:mk2efe2cec5cae4bbc60b7da84211fe193ee6e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:32.637824 1440430 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:04:32.637850 1440430 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:04:32.637863 1440430 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:04:32.637885 1440430 start.go:360] acquireMachinesLock for force-systemd-env-915858: {Name:mk0d075f766c12ce9735ab84d21aceef05e4cc88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:04:32.637996 1440430 start.go:364] duration metric: took 92.798µs to acquireMachinesLock for "force-systemd-env-915858"
	I1002 22:04:32.638061 1440430 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:04:32.638137 1440430 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:04:32.641282 1440430 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:04:32.641522 1440430 start.go:159] libmachine.API.Create for "force-systemd-env-915858" (driver="docker")
	I1002 22:04:32.641574 1440430 client.go:168] LocalClient.Create starting
	I1002 22:04:32.641659 1440430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:04:32.641705 1440430 main.go:141] libmachine: Decoding PEM data...
	I1002 22:04:32.641722 1440430 main.go:141] libmachine: Parsing certificate...
	I1002 22:04:32.641776 1440430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:04:32.641796 1440430 main.go:141] libmachine: Decoding PEM data...
	I1002 22:04:32.641810 1440430 main.go:141] libmachine: Parsing certificate...
	I1002 22:04:32.642216 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:04:32.657894 1440430 cli_runner.go:211] docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:04:32.657997 1440430 network_create.go:284] running [docker network inspect force-systemd-env-915858] to gather additional debugging logs...
	I1002 22:04:32.658017 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858
	W1002 22:04:32.673545 1440430 cli_runner.go:211] docker network inspect force-systemd-env-915858 returned with exit code 1
	I1002 22:04:32.673582 1440430 network_create.go:287] error running [docker network inspect force-systemd-env-915858]: docker network inspect force-systemd-env-915858: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-915858 not found
	I1002 22:04:32.673595 1440430 network_create.go:289] output of [docker network inspect force-systemd-env-915858]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-915858 not found
	
	** /stderr **
	I1002 22:04:32.673693 1440430 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:04:32.690570 1440430 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:04:32.690922 1440430 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:04:32.691299 1440430 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:04:32.691586 1440430 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66bd0105a658 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:2d:d0:27:f9:2a} reservation:<nil>}
	I1002 22:04:32.692034 1440430 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9cd0}
	I1002 22:04:32.692054 1440430 network_create.go:124] attempt to create docker network force-systemd-env-915858 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 22:04:32.692108 1440430 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-915858 force-systemd-env-915858
	I1002 22:04:32.762077 1440430 network_create.go:108] docker network force-systemd-env-915858 192.168.85.0/24 created
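The subnet scan a few lines up walks the 192.168.x.0/24 candidates and skips each one already claimed by an existing bridge before settling on 192.168.85.0/24. A rough shell equivalent of that probe (candidate list taken from this log, not from minikube's source):

	# Print the first candidate /24 not already used by a docker network.
	for net in 192.168.49.0/24 192.168.58.0/24 192.168.67.0/24 192.168.76.0/24 192.168.85.0/24; do
	  if ! docker network ls -q | xargs -r docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' | grep -qx "$net"; then
	    echo "free subnet: $net"
	    break
	  fi
	done

The `docker network create` that follows then pins the gateway to .1 and labels the network so minikube can find and clean it up later.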
	I1002 22:04:32.762113 1440430 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-915858" container
	I1002 22:04:32.762188 1440430 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:04:32.781055 1440430 cli_runner.go:164] Run: docker volume create force-systemd-env-915858 --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:04:32.800385 1440430 oci.go:103] Successfully created a docker volume force-systemd-env-915858
	I1002 22:04:32.800487 1440430 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-915858-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --entrypoint /usr/bin/test -v force-systemd-env-915858:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:04:33.295947 1440430 oci.go:107] Successfully prepared a docker volume force-systemd-env-915858
	I1002 22:04:33.296013 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:33.296033 1440430 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:04:33.296100 1440430 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-915858:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 22:04:37.746758 1440430 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-915858:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.450608052s)
	I1002 22:04:37.746793 1440430 kic.go:203] duration metric: took 4.450756292s to extract preloaded images to volume ...
	W1002 22:04:37.746942 1440430 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:04:37.747055 1440430 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:04:37.807585 1440430 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-915858 --name force-systemd-env-915858 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-915858 --network force-systemd-env-915858 --ip 192.168.85.2 --volume force-systemd-env-915858:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
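That single `docker run` is the whole node-creation step. A trimmed, re-wrapped copy to make its shape readable (flags and image digest verbatim from the line above; the labels and the remaining --publish flags elided):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --volume force-systemd-env-915858:/var \
	  --network force-systemd-env-915858 --ip 192.168.85.2 \
	  --hostname force-systemd-env-915858 --name force-systemd-env-915858 \
	  --memory=3072mb --cpus=2 -e container=docker \
	  --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d

The privileged/tmpfs/seccomp combination exists so systemd can run as PID 1 inside the container, and /var comes from the named volume into which the preloaded images were just extracted.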
	I1002 22:04:38.152176 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Running}}
	I1002 22:04:38.174604 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.203587 1440430 cli_runner.go:164] Run: docker exec force-systemd-env-915858 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:04:38.254825 1440430 oci.go:144] the created container "force-systemd-env-915858" has a running status.
	I1002 22:04:38.254854 1440430 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa...
	I1002 22:04:38.498107 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 22:04:38.498214 1440430 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:04:38.525139 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.557119 1440430 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:04:38.557138 1440430 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-915858 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:04:38.628829 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.656220 1440430 machine.go:93] provisionDockerMachine start ...
	I1002 22:04:38.656305 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:38.680041 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:38.680371 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:38.680381 1440430 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:04:38.680990 1440430 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38620->127.0.0.1:34536: read: connection reset by peer
	I1002 22:04:41.813723 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-915858
	
	I1002 22:04:41.813749 1440430 ubuntu.go:182] provisioning hostname "force-systemd-env-915858"
	I1002 22:04:41.813883 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:41.831561 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:41.831881 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:41.831896 1440430 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-915858 && echo "force-systemd-env-915858" | sudo tee /etc/hostname
	I1002 22:04:41.975773 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-915858
	
	I1002 22:04:41.975865 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:41.993432 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:41.993743 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:41.993765 1440430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-915858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-915858/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-915858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:04:42.135705 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:04:42.135837 1440430 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:04:42.135873 1440430 ubuntu.go:190] setting up certificates
	I1002 22:04:42.135915 1440430 provision.go:84] configureAuth start
	I1002 22:04:42.136024 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.156541 1440430 provision.go:143] copyHostCerts
	I1002 22:04:42.156592 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:04:42.156632 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:04:42.156641 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:04:42.156729 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:04:42.156822 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:04:42.156842 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:04:42.156847 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:04:42.156878 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:04:42.156918 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:04:42.156936 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:04:42.156944 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:04:42.156979 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:04:42.157041 1440430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-915858 san=[127.0.0.1 192.168.85.2 force-systemd-env-915858 localhost minikube]
	I1002 22:04:42.227863 1440430 provision.go:177] copyRemoteCerts
	I1002 22:04:42.227943 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:04:42.227993 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.249130 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.354427 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 22:04:42.354508 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:04:42.373308 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 22:04:42.373423 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:04:42.392185 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 22:04:42.392247 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:04:42.410534 1440430 provision.go:87] duration metric: took 274.576822ms to configureAuth
	I1002 22:04:42.410560 1440430 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:04:42.410739 1440430 config.go:182] Loaded profile config "force-systemd-env-915858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:04:42.410854 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.427874 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:42.428182 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:42.428202 1440430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:04:42.690522 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:04:42.690549 1440430 machine.go:96] duration metric: took 4.034309724s to provisionDockerMachine
	I1002 22:04:42.690570 1440430 client.go:171] duration metric: took 10.048974881s to LocalClient.Create
	I1002 22:04:42.690595 1440430 start.go:167] duration metric: took 10.049074218s to libmachine.API.Create "force-systemd-env-915858"
	I1002 22:04:42.690604 1440430 start.go:293] postStartSetup for "force-systemd-env-915858" (driver="docker")
	I1002 22:04:42.690615 1440430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:04:42.690697 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:04:42.690773 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.714550 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.810478 1440430 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:04:42.813895 1440430 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:04:42.813921 1440430 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:04:42.813933 1440430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:04:42.813988 1440430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:04:42.814102 1440430 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:04:42.814111 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /etc/ssl/certs/12725142.pem
	I1002 22:04:42.814209 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:04:42.822113 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:04:42.840047 1440430 start.go:296] duration metric: took 149.426075ms for postStartSetup
	I1002 22:04:42.840444 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.857883 1440430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json ...
	I1002 22:04:42.858230 1440430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:04:42.858285 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.874869 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.967514 1440430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:04:42.972645 1440430 start.go:128] duration metric: took 10.334490976s to createHost
	I1002 22:04:42.972670 1440430 start.go:83] releasing machines lock for "force-systemd-env-915858", held for 10.334659909s
	I1002 22:04:42.972762 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.990447 1440430 ssh_runner.go:195] Run: cat /version.json
	I1002 22:04:42.990507 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.990513 1440430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:04:42.990644 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:43.020730 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:43.021725 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:43.117928 1440430 ssh_runner.go:195] Run: systemctl --version
	I1002 22:04:43.207039 1440430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:04:43.242705 1440430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:04:43.246878 1440430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:04:43.246982 1440430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:04:43.275057 1440430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
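The find invocation above is logged as a raw argument vector, so its parentheses and globs appear unescaped; typed into an interactive shell it needs quoting. A runnable equivalent (sketch; the "$1" idiom stands in for find's inline {} substitution):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

Renaming the bridge and podman configs to *.mk_disabled takes them out of CNI's load path without deleting them, which is what the "disabled [...] bridge cni config(s)" line that follows reports.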
	I1002 22:04:43.275119 1440430 start.go:495] detecting cgroup driver to use...
	I1002 22:04:43.275159 1440430 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 22:04:43.275235 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:04:43.292413 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:04:43.305271 1440430 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:04:43.305365 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:04:43.323220 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:04:43.340842 1440430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:04:43.449348 1440430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:04:43.566707 1440430 docker.go:234] disabling docker service ...
	I1002 22:04:43.566776 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:04:43.588231 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:04:43.601949 1440430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:04:43.716977 1440430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:04:43.838729 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
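The stop, disable, mask sequence above (applied first to cri-docker, then to docker itself) is the standard way to quiesce a socket-activated service for good: stop halts it now, disable removes it from the boot sequence, and mask links the unit to /dev/null so neither socket activation nor a unit dependency can bring it back. By hand (sketch):

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service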
	I1002 22:04:43.852265 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:04:43.866523 1440430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:04:43.866648 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.876297 1440430 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 22:04:43.876395 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.885983 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.895339 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.904323 1440430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:04:43.912504 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.921098 1440430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.934869 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
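Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch of the affected keys only; any surrounding TOML table headers are whatever the base image ships):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]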
	I1002 22:04:43.943939 1440430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:04:43.951447 1440430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
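Writing 1 into /proc/sys/net/ipv4/ip_forward is the direct-file form of enabling packet forwarding; the equivalent sysctl invocation would be:

    sudo sysctl -w net.ipv4.ip_forward=1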
	I1002 22:04:43.958929 1440430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:04:44.077675 1440430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:04:44.214364 1440430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:04:44.214511 1440430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:04:44.218632 1440430 start.go:563] Will wait 60s for crictl version
	I1002 22:04:44.218714 1440430 ssh_runner.go:195] Run: which crictl
	I1002 22:04:44.222355 1440430 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:04:44.246274 1440430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:04:44.246364 1440430 ssh_runner.go:195] Run: crio --version
	I1002 22:04:44.274095 1440430 ssh_runner.go:195] Run: crio --version
	I1002 22:04:44.305379 1440430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:04:44.308070 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:04:44.324500 1440430 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:04:44.328381 1440430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
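The bash one-liner above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, echo appends the fresh mapping, and the temp-file-plus-sudo-cp dance lets the unprivileged pipeline produce the root-owned file. The resulting entry:

    192.168.85.1	host.minikube.internal

The same pattern is repeated below for control-plane.minikube.internal.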
	I1002 22:04:44.338466 1440430 kubeadm.go:883] updating cluster {Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:04:44.338586 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:44.338648 1440430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:04:44.372631 1440430 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:04:44.372655 1440430 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:04:44.372710 1440430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:04:44.398204 1440430 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:04:44.398225 1440430 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:04:44.398233 1440430 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:04:44.398323 1440430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-915858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:04:44.398410 1440430 ssh_runner.go:195] Run: crio config
	I1002 22:04:44.453186 1440430 cni.go:84] Creating CNI manager for ""
	I1002 22:04:44.453210 1440430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:04:44.453229 1440430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:04:44.453257 1440430 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-915858 NodeName:force-systemd-env-915858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:04:44.453392 1440430 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-915858"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:04:44.453470 1440430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:04:44.461619 1440430 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:04:44.461715 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:04:44.469706 1440430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 22:04:44.483726 1440430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:04:44.501672 1440430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
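With the rendered config staged on the node, kubeadm's --dry-run mode is one way to validate it before the real init (sketch; it prints what would be done without changing the machine):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run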
	I1002 22:04:44.515917 1440430 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:04:44.519966 1440430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:04:44.529831 1440430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:04:44.639711 1440430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:04:44.656221 1440430 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858 for IP: 192.168.85.2
	I1002 22:04:44.656288 1440430 certs.go:195] generating shared ca certs ...
	I1002 22:04:44.656320 1440430 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:44.656499 1440430 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:04:44.656579 1440430 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:04:44.656615 1440430 certs.go:257] generating profile certs ...
	I1002 22:04:44.656697 1440430 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key
	I1002 22:04:44.656736 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt with IP's: []
	I1002 22:04:45.001331 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt ...
	I1002 22:04:45.001365 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt: {Name:mk85382c5ebce2f456c8d7ef9968f1044df02c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.001590 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key ...
	I1002 22:04:45.001600 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key: {Name:mk34bacf3f283a2ea6b494f3a7d56ca55d3cd26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.001694 1440430 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb
	I1002 22:04:45.001708 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 22:04:45.307321 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb ...
	I1002 22:04:45.307364 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb: {Name:mk551b3cab8f5f77e75b1d1da703ba95fc9b938a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.307626 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb ...
	I1002 22:04:45.307647 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb: {Name:mk8b0c42710d6b9c0ea10564747958515fc169ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.307785 1440430 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt
	I1002 22:04:45.307929 1440430 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key
	I1002 22:04:45.308025 1440430 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key
	I1002 22:04:45.308075 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt with IP's: []
	I1002 22:04:45.442362 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt ...
	I1002 22:04:45.442395 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt: {Name:mk20e0be43eb9cb75dc89298cc1f6d7320873281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.442669 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key ...
	I1002 22:04:45.442691 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key: {Name:mkda08534f96b345ac944ba639e04637787e4592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.442861 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 22:04:45.442909 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 22:04:45.442929 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 22:04:45.442946 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 22:04:45.442963 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 22:04:45.443007 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 22:04:45.443027 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 22:04:45.443042 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 22:04:45.443115 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:04:45.443178 1440430 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:04:45.443194 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:04:45.443237 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:04:45.443282 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:04:45.443313 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:04:45.443378 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:04:45.443428 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem -> /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.443448 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.443467 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.444074 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:04:45.469931 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:04:45.492973 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:04:45.517640 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:04:45.536370 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 22:04:45.554052 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:04:45.571663 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:04:45.591013 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:04:45.610132 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:04:45.629523 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:04:45.647260 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:04:45.665232 1440430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:04:45.679223 1440430 ssh_runner.go:195] Run: openssl version
	I1002 22:04:45.685594 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:04:45.694187 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.698145 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.698209 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.739203 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:04:45.747860 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:04:45.756653 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.760898 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.760969 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.802536 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:04:45.810907 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:04:45.819303 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.823169 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.823245 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.864349 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
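Each "openssl x509 -hash -noout" call above prints the certificate's subject-name hash, and the ln -fs that follows uses it as the link name (here b5213941.0, 51391683.0 and 3ec20f2e.0); this is the hashed-directory layout OpenSSL uses to look up CAs, with the .0 suffix disambiguating hash collisions. By hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"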
	I1002 22:04:45.872822 1440430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:04:45.876417 1440430 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:04:45.876469 1440430 kubeadm.go:400] StartCluster: {Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:04:45.876547 1440430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:04:45.876613 1440430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:04:45.905931 1440430 cri.go:89] found id: ""
	I1002 22:04:45.906000 1440430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:04:45.913903 1440430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:04:45.921691 1440430 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:04:45.921813 1440430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:04:45.929694 1440430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:04:45.929713 1440430 kubeadm.go:157] found existing configuration files:
	
	I1002 22:04:45.929766 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:04:45.937349 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:04:45.937444 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:04:45.944716 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:04:45.952321 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:04:45.952409 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:04:45.959868 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:04:45.967205 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:04:45.967283 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:04:45.974415 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:04:45.981891 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:04:45.981967 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:04:45.989386 1440430 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:04:46.055779 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:04:46.056073 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:04:46.130248 1440430 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
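Note the PID switch in the lines that follow: entries from process 1433697 (the parallel force-systemd-flag run, compare the force-systemd-flag-292135 certificate names further down) are interleaved with this 1440430 force-systemd-env run, since both tests executed concurrently. If needed, the two streams can be separated by PID (sketch; <logfile> is a placeholder):

    grep ' 1440430 ' <logfile>    # force-systemd-env entries only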
	I1002 22:06:48.970051 1433697 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000161419s
	I1002 22:06:48.970975 1433697 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00060891s
	I1002 22:06:48.971410 1433697 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00121086s
	I1002 22:06:48.971435 1433697 kubeadm.go:318] 
	I1002 22:06:48.971533 1433697 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:06:48.971624 1433697 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:06:48.971725 1433697 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:06:48.972084 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:06:48.972173 1433697 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:06:48.972256 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:06:48.972261 1433697 kubeadm.go:318] 
	I1002 22:06:48.976310 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:06:48.976550 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:06:48.976667 1433697 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:06:48.977294 1433697 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:06:48.977370 1433697 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 22:06:48.977499 1433697 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-292135 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501701778s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000161419s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00060891s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00121086s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
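All three health endpoints report "connection refused" here, meaning the control-plane static pods never started listening at all. Beyond the crictl commands the message itself suggests, the kubelet journal is the usual next probe on such a node (sketch):

    sudo journalctl -u kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause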
	
	I1002 22:06:48.977584 1433697 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 22:06:49.514090 1433697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:06:49.527101 1433697 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:06:49.527168 1433697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:06:49.536813 1433697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:06:49.536837 1433697 kubeadm.go:157] found existing configuration files:
	
	I1002 22:06:49.536889 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:06:49.545354 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:06:49.545510 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:06:49.553150 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:06:49.560864 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:06:49.560926 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:06:49.573071 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:06:49.580833 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:06:49.580900 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:06:49.588277 1433697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:06:49.595558 1433697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:06:49.595668 1433697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:06:49.603171 1433697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:06:49.670321 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:06:49.670593 1433697 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:06:49.740522 1433697 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:08:56.056566 1440430 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:08:56.056669 1440430 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 22:08:56.061413 1440430 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:08:56.061612 1440430 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:08:56.061722 1440430 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:08:56.061794 1440430 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:08:56.061834 1440430 kubeadm.go:318] OS: Linux
	I1002 22:08:56.061909 1440430 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:08:56.061970 1440430 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:08:56.062020 1440430 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:08:56.062122 1440430 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:08:56.062175 1440430 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:08:56.062230 1440430 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:08:56.062279 1440430 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:08:56.062340 1440430 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:08:56.062451 1440430 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:08:56.062537 1440430 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:08:56.062643 1440430 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:08:56.062743 1440430 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:08:56.062812 1440430 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:08:56.066775 1440430 out.go:252]   - Generating certificates and keys ...
	I1002 22:08:56.066889 1440430 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:08:56.066963 1440430 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:08:56.067056 1440430 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:08:56.067124 1440430 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:08:56.067193 1440430 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:08:56.067250 1440430 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:08:56.067311 1440430 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:08:56.067451 1440430 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:08:56.067511 1440430 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:08:56.067646 1440430 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:08:56.067718 1440430 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:08:56.067788 1440430 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:08:56.067839 1440430 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:08:56.067901 1440430 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:08:56.067958 1440430 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:08:56.068022 1440430 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:08:56.068083 1440430 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:08:56.068153 1440430 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:08:56.068214 1440430 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:08:56.068303 1440430 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:08:56.068376 1440430 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:08:56.071442 1440430 out.go:252]   - Booting up control plane ...
	I1002 22:08:56.071593 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:08:56.071702 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:08:56.071784 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:08:56.071937 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:08:56.072048 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:08:56.072187 1440430 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:08:56.072295 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:08:56.072345 1440430 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:08:56.072485 1440430 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:08:56.072599 1440430 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:08:56.072665 1440430 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.009744286s
	I1002 22:08:56.072765 1440430 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:08:56.072854 1440430 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:08:56.072951 1440430 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:08:56.073037 1440430 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:08:56.073122 1440430 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000128324s
	I1002 22:08:56.073203 1440430 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001043566s
	I1002 22:08:56.073292 1440430 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000910499s
	I1002 22:08:56.073301 1440430 kubeadm.go:318] 
	I1002 22:08:56.073397 1440430 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:08:56.073557 1440430 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:08:56.073671 1440430 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:08:56.073782 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:08:56.073870 1440430 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:08:56.073957 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:08:56.074081 1440430 kubeadm.go:318] 
	W1002 22:08:56.074161 1440430 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.009744286s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000128324s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001043566s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000910499s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
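	Triage sketch for the advice kubeadm prints above, runnable as-is against this node's CRI-O socket (CONTAINERID is a placeholder taken from the ps output):
	
	    # List Kubernetes containers known to CRI-O, then pull logs for the failing one.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	    # The runtime journal often carries the container-creation error as well.
	    sudo journalctl -u crio --no-pager -n 200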
	
	I1002 22:08:56.074248 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 22:08:56.626849 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:08:56.639797 1440430 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:08:56.639864 1440430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:08:56.647978 1440430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:08:56.647995 1440430 kubeadm.go:157] found existing configuration files:
	
	I1002 22:08:56.648056 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:08:56.655974 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:08:56.656037 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:08:56.663910 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:08:56.671636 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:08:56.671754 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:08:56.678964 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:08:56.686822 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:08:56.686938 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:08:56.694507 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:08:56.702135 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:08:56.702244 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
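	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A condensed sketch of the same pattern, assuming the endpoint string used in this run:
	
	    for f in admin kubelet controller-manager scheduler; do
	      # Keep the kubeconfig only if it targets the expected endpoint; otherwise drop it.
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done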
	I1002 22:08:56.710350 1440430 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:08:56.749404 1440430 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:08:56.749524 1440430 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:08:56.772393 1440430 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:08:56.772513 1440430 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:08:56.772576 1440430 kubeadm.go:318] OS: Linux
	I1002 22:08:56.772643 1440430 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:08:56.772720 1440430 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:08:56.772790 1440430 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:08:56.772867 1440430 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:08:56.772937 1440430 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:08:56.773015 1440430 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:08:56.773085 1440430 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:08:56.773161 1440430 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:08:56.773226 1440430 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:08:56.842884 1440430 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:08:56.843049 1440430 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:08:56.843150 1440430 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:08:56.851023 1440430 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:08:56.857843 1440430 out.go:252]   - Generating certificates and keys ...
	I1002 22:08:56.857952 1440430 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:08:56.858060 1440430 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:08:56.858205 1440430 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 22:08:56.858286 1440430 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 22:08:56.858403 1440430 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 22:08:56.858473 1440430 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 22:08:56.858549 1440430 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 22:08:56.858826 1440430 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 22:08:56.859235 1440430 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 22:08:56.859583 1440430 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 22:08:56.859908 1440430 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 22:08:56.860005 1440430 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:08:57.682312 1440430 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:08:58.082947 1440430 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:08:58.180038 1440430 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:08:59.800280 1440430 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:09:00.065631 1440430 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:09:00.083521 1440430 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:09:00.083611 1440430 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:09:00.088108 1440430 out.go:252]   - Booting up control plane ...
	I1002 22:09:00.088218 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:09:00.097928 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:09:00.098019 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:09:00.143569 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:09:00.143686 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:09:00.143797 1440430 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:09:00.143886 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:09:00.144667 1440430 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:09:00.341995 1440430 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:09:00.342675 1440430 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:09:01.346476 1440430 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001809653s
	I1002 22:09:01.349551 1440430 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:09:01.349666 1440430 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:09:01.349799 1440430 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:09:01.349943 1440430 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:10:52.920190 1433697 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 22:10:52.920358 1433697 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 22:10:52.923108 1433697 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:10:52.923165 1433697 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:10:52.923267 1433697 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:10:52.923324 1433697 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:10:52.923367 1433697 kubeadm.go:318] OS: Linux
	I1002 22:10:52.923443 1433697 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:10:52.923506 1433697 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:10:52.923673 1433697 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:10:52.923736 1433697 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:10:52.923793 1433697 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:10:52.923850 1433697 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:10:52.923913 1433697 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:10:52.923971 1433697 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:10:52.924049 1433697 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:10:52.924142 1433697 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:10:52.924273 1433697 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:10:52.924377 1433697 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:10:52.924450 1433697 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:10:52.930904 1433697 out.go:252]   - Generating certificates and keys ...
	I1002 22:10:52.931006 1433697 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:10:52.931082 1433697 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:10:52.931162 1433697 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 22:10:52.931230 1433697 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 22:10:52.931309 1433697 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 22:10:52.931365 1433697 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 22:10:52.931432 1433697 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 22:10:52.931496 1433697 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 22:10:52.931576 1433697 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 22:10:52.931653 1433697 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 22:10:52.931695 1433697 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 22:10:52.931753 1433697 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:10:52.931807 1433697 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:10:52.931869 1433697 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:10:52.931925 1433697 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:10:52.931991 1433697 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:10:52.932049 1433697 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:10:52.932135 1433697 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:10:52.932204 1433697 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:10:52.935194 1433697 out.go:252]   - Booting up control plane ...
	I1002 22:10:52.935325 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:10:52.935423 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:10:52.935499 1433697 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:10:52.935611 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:10:52.935710 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:10:52.935821 1433697 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:10:52.935911 1433697 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:10:52.935956 1433697 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:10:52.936095 1433697 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:10:52.936205 1433697 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:10:52.936268 1433697 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50152796s
	I1002 22:10:52.936367 1433697 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:10:52.936454 1433697 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:10:52.936550 1433697 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:10:52.936636 1433697 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:10:52.936718 1433697 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	I1002 22:10:52.936804 1433697 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	I1002 22:10:52.936887 1433697 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	I1002 22:10:52.936897 1433697 kubeadm.go:318] 
	I1002 22:10:52.936991 1433697 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:10:52.937079 1433697 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:10:52.937179 1433697 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:10:52.937280 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:10:52.937360 1433697 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:10:52.937450 1433697 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:10:52.937516 1433697 kubeadm.go:402] duration metric: took 8m14.091729136s to StartCluster
	I1002 22:10:52.937554 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:10:52.937620 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:10:52.937711 1433697 kubeadm.go:318] 
	I1002 22:10:52.970595 1433697 cri.go:89] found id: ""
	I1002 22:10:52.970634 1433697 logs.go:282] 0 containers: []
	W1002 22:10:52.970643 1433697 logs.go:284] No container was found matching "kube-apiserver"
	I1002 22:10:52.970650 1433697 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:10:52.970715 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:10:53.010102 1433697 cri.go:89] found id: ""
	I1002 22:10:53.010127 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.010137 1433697 logs.go:284] No container was found matching "etcd"
	I1002 22:10:53.010145 1433697 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:10:53.010211 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:10:53.038224 1433697 cri.go:89] found id: ""
	I1002 22:10:53.038249 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.038258 1433697 logs.go:284] No container was found matching "coredns"
	I1002 22:10:53.038265 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:10:53.038330 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:10:53.069946 1433697 cri.go:89] found id: ""
	I1002 22:10:53.069973 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.069982 1433697 logs.go:284] No container was found matching "kube-scheduler"
	I1002 22:10:53.069989 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:10:53.070076 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:10:53.097391 1433697 cri.go:89] found id: ""
	I1002 22:10:53.097418 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.097428 1433697 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:10:53.097435 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:10:53.097495 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:10:53.123533 1433697 cri.go:89] found id: ""
	I1002 22:10:53.123559 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.123567 1433697 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 22:10:53.123575 1433697 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:10:53.123638 1433697 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:10:53.149708 1433697 cri.go:89] found id: ""
	I1002 22:10:53.149732 1433697 logs.go:282] 0 containers: []
	W1002 22:10:53.149741 1433697 logs.go:284] No container was found matching "kindnet"
	I1002 22:10:53.149750 1433697 logs.go:123] Gathering logs for dmesg ...
	I1002 22:10:53.149761 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:10:53.166745 1433697 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:10:53.166776 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:10:53.238278 1433697 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:10:53.229939    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.230757    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232237    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.232696    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:53.234220    2357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1002 22:10:53.238300 1433697 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:10:53.238311 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:10:53.313289 1433697 logs.go:123] Gathering logs for container status ...
	I1002 22:10:53.313329 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:10:53.345040 1433697 logs.go:123] Gathering logs for kubelet ...
	I1002 22:10:53.345075 1433697 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
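	The gathering steps above correspond to plain shell commands; a sketch of the same collection, runnable inside the node with the exact flags from this run:
	
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
	    sudo journalctl -u crio -n 400                                            # CRI-O runtime log
	    sudo journalctl -u kubelet -n 400                                         # kubelet log
	    sudo crictl ps -a                                                         # container status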
	W1002 22:10:53.436481 1433697 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 22:10:53.436555 1433697 out.go:285] * 
	W1002 22:10:53.436665 1433697 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:10:53.436691 1433697 out.go:285] * 
	W1002 22:10:53.438957 1433697 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:10:53.445386 1433697 out.go:203] 
	W1002 22:10:53.449177 1433697 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.50152796s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000356149s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000210668s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000739305s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:10:53.449206 1433697 out.go:285] * 
	I1002 22:10:53.452238 1433697 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 22:10:49 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:49.904113242Z" level=info msg="createCtr: removing container 4f763efe59d23880037fd9842902da41907a00c6833f94793244b4c78bfe85d4" id=4c3e10ec-8ddf-46cf-a98d-ffb99a0e4ce4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:49 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:49.904145668Z" level=info msg="createCtr: deleting container 4f763efe59d23880037fd9842902da41907a00c6833f94793244b4c78bfe85d4 from storage" id=4c3e10ec-8ddf-46cf-a98d-ffb99a0e4ce4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:49 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:49.906820133Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-292135_kube-system_40180f071abe40ad56d5d89ebda957d9_0" id=4c3e10ec-8ddf-46cf-a98d-ffb99a0e4ce4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.881615699Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7668785a-bdc0-45c2-9c14-1050beceacc1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.882510273Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0bcbf947-1b48-4ffe-871d-d9fe918ee620 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.88352747Z" level=info msg="Creating container: kube-system/etcd-force-systemd-flag-292135/etcd" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.88378349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.888343953Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.88883539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.899779662Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.900930928Z" level=info msg="createCtr: deleting container ID 329902e4e68f29ecd152173c84815187acef8a6cfc2fc8c5ecdda334bb465691 from idIndex" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.900974472Z" level=info msg="createCtr: removing container 329902e4e68f29ecd152173c84815187acef8a6cfc2fc8c5ecdda334bb465691" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.901013191Z" level=info msg="createCtr: deleting container 329902e4e68f29ecd152173c84815187acef8a6cfc2fc8c5ecdda334bb465691 from storage" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:52 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:52.903794444Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-flag-292135_kube-system_84cc8eff2e6e8d2452721ec28b660ecc_0" id=d1761a65-b0bd-4e61-8e4e-2639377f8abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.88182602Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3151d8ed-301d-49f5-a870-6f7f17150844 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.882802668Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=df0b6eb0-1ee7-4861-8c50-06df2f9d929b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.883775312Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-292135/kube-controller-manager" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.884007817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.891122107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.891703692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.903288428Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.904467616Z" level=info msg="createCtr: deleting container ID 1063f9e9752400a69b21d7c57f3a8214281826e480535c41eaa923634daddcfe from idIndex" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.904501872Z" level=info msg="createCtr: removing container 1063f9e9752400a69b21d7c57f3a8214281826e480535c41eaa923634daddcfe" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.904538737Z" level=info msg="createCtr: deleting container 1063f9e9752400a69b21d7c57f3a8214281826e480535c41eaa923634daddcfe from storage" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:10:53 force-systemd-flag-292135 crio[834]: time="2025-10-02T22:10:53.915018163Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-292135_kube-system_72179047380b9cf3b419ea608f1a121f_0" id=9bff7674-01b0-4da3-9bae-c31a09df0362 name=/runtime.v1.RuntimeService/CreateContainer
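	The repeated "Container creation error: cannot open sd-bus: No such file or directory" entries above are the proximate failure in this log: CRI-O appears to be creating containers via the systemd cgroup manager but cannot reach a systemd bus inside the node. A hedged check of the configured manager (the config paths are CRI-O's defaults, and the status subcommand may not exist in every build):
	
	    # Which cgroup manager is CRI-O using, systemd or cgroupfs?
	    sudo crio status config | grep -i cgroup_manager
	    # Fall back to the on-disk configuration if the subcommand is unavailable.
	    sudo grep -ri cgroup_manager /etc/crio/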
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:10:54.709626    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:54.710229    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:54.711754    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:54.712140    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:10:54.713567    2482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[  +2.995481] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:37] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:10:54 up  6:52,  0 user,  load average: 0.63, 1.10, 1.78
	Linux force-systemd-flag-292135 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:49.907131    1774 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]:  > podSandboxID="94c6257858336fa3780d79d2888cd29597f9a42eca5d8c97696153ef6f5cd1c1"
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:49.907245    1774 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-292135_kube-system(40180f071abe40ad56d5d89ebda957d9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]:  > logger="UnhandledError"
	Oct 02 22:10:49 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:49.907276    1774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-292135" podUID="40180f071abe40ad56d5d89ebda957d9"
	Oct 02 22:10:51 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:51.459771    1774 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:52.881191    1774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-292135\" not found" node="force-systemd-flag-292135"
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:52.904283    1774 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]:  > podSandboxID="b3e7ae275433b40cd8251d8d3230710e44b8b81fab2221cf86311dba5d37f119"
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:52.904373    1774 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]:         container etcd start failed in pod etcd-force-systemd-flag-292135_kube-system(84cc8eff2e6e8d2452721ec28b660ecc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]:  > logger="UnhandledError"
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:52.904403    1774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-flag-292135" podUID="84cc8eff2e6e8d2452721ec28b660ecc"
	Oct 02 22:10:52 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:52.918809    1774 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-292135\" not found"
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:53.881204    1774 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-292135\" not found" node="force-systemd-flag-292135"
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:53.915395    1774 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]:  > podSandboxID="941214d2b9f96dc20043760cd728791648f36eac6cc98586b28fea6deb6e9e0c"
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:53.915488    1774 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-292135_kube-system(72179047380b9cf3b419ea608f1a121f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]:  > logger="UnhandledError"
	Oct 02 22:10:53 force-systemd-flag-292135 kubelet[1774]: E1002 22:10:53.915519    1774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-292135" podUID="72179047380b9cf3b419ea608f1a121f"
	

-- /stdout --
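
The repeated "cannot open sd-bus: No such file or directory" errors in the crio and kubelet excerpts above are the proximate failure: the run forces the systemd cgroup driver, but inside the kic container systemd's D-Bus endpoint is not reachable, so runc cannot create the etcd, kube-scheduler, or kube-controller-manager containers, the apiserver never comes up, and the kubectl probes fail with "connection refused". A minimal guard sketched in Go, assuming the sd_booted(3) convention that systemd creates /run/systemd/system/ at boot; this is illustrative, not minikube's actual detection code:

	// sdbooted.go: illustrative sketch, not minikube's detection code.
	package main

	import (
		"fmt"
		"os"
	)

	// systemdAvailable reports whether systemd is the running service
	// manager, using the sd_booted(3) convention: /run/systemd/system/
	// exists and is a directory. Without it, runc's systemd cgroup
	// driver cannot reach sd-bus and container creation fails exactly
	// as in the log above.
	func systemdAvailable() bool {
		fi, err := os.Lstat("/run/systemd/system")
		return err == nil && fi.IsDir()
	}

	func main() {
		if systemdAvailable() {
			fmt.Println("cgroup_manager = \"systemd\" is safe here")
		} else {
			fmt.Println("no systemd: use cgroupfs or expect 'cannot open sd-bus'")
		}
	}
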
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-292135 -n force-systemd-flag-292135
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-292135 -n force-systemd-flag-292135: exit status 6 (376.563204ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 22:10:55.203344 1444393 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-292135" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-292135" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-292135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-292135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-292135: (1.901139631s)
--- FAIL: TestForceSystemdFlag (510.60s)

x
+
TestForceSystemdEnv (512.97s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1002 22:06:25.853809 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:09:14.582436 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m29.502219511s)

-- stdout --
	* [force-systemd-env-915858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-915858" primary control-plane node in "force-systemd-env-915858" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 22:04:32.430765 1440430 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:04:32.430906 1440430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:32.430943 1440430 out.go:374] Setting ErrFile to fd 2...
	I1002 22:04:32.430975 1440430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:32.431417 1440430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:04:32.431965 1440430 out.go:368] Setting JSON to false
	I1002 22:04:32.433008 1440430 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24398,"bootTime":1759418275,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:04:32.433128 1440430 start.go:140] virtualization:  
	I1002 22:04:32.436686 1440430 out.go:179] * [force-systemd-env-915858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:04:32.438356 1440430 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:04:32.438439 1440430 notify.go:220] Checking for updates...
	I1002 22:04:32.441507 1440430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:04:32.443075 1440430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:04:32.444306 1440430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:04:32.445650 1440430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:04:32.447102 1440430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1002 22:04:32.449338 1440430 config.go:182] Loaded profile config "force-systemd-flag-292135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:04:32.449508 1440430 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:04:32.471976 1440430 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:04:32.472116 1440430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:04:32.534405 1440430 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:04:32.524995982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:04:32.534516 1440430 docker.go:318] overlay module found
	I1002 22:04:32.536689 1440430 out.go:179] * Using the docker driver based on user configuration
	I1002 22:04:32.538601 1440430 start.go:304] selected driver: docker
	I1002 22:04:32.538620 1440430 start.go:924] validating driver "docker" against <nil>
	I1002 22:04:32.538641 1440430 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:04:32.539457 1440430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:04:32.603430 1440430 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:04:32.594571914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:04:32.603592 1440430 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:04:32.603814 1440430 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 22:04:32.606007 1440430 out.go:179] * Using Docker driver with root privileges
	I1002 22:04:32.608192 1440430 cni.go:84] Creating CNI manager for ""
	I1002 22:04:32.608268 1440430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:04:32.608281 1440430 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:04:32.608376 1440430 start.go:348] cluster config:
	{Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:04:32.611011 1440430 out.go:179] * Starting "force-systemd-env-915858" primary control-plane node in "force-systemd-env-915858" cluster
	I1002 22:04:32.613424 1440430 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:04:32.616139 1440430 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:04:32.618500 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:32.618552 1440430 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:04:32.618564 1440430 cache.go:58] Caching tarball of preloaded images
	I1002 22:04:32.618598 1440430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:04:32.618659 1440430 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:04:32.618670 1440430 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:04:32.618785 1440430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json ...
	I1002 22:04:32.618806 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json: {Name:mk2efe2cec5cae4bbc60b7da84211fe193ee6e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:32.637824 1440430 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:04:32.637850 1440430 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:04:32.637863 1440430 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:04:32.637885 1440430 start.go:360] acquireMachinesLock for force-systemd-env-915858: {Name:mk0d075f766c12ce9735ab84d21aceef05e4cc88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:04:32.637996 1440430 start.go:364] duration metric: took 92.798µs to acquireMachinesLock for "force-systemd-env-915858"
	I1002 22:04:32.638061 1440430 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:04:32.638137 1440430 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:04:32.641282 1440430 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:04:32.641522 1440430 start.go:159] libmachine.API.Create for "force-systemd-env-915858" (driver="docker")
	I1002 22:04:32.641574 1440430 client.go:168] LocalClient.Create starting
	I1002 22:04:32.641659 1440430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:04:32.641705 1440430 main.go:141] libmachine: Decoding PEM data...
	I1002 22:04:32.641722 1440430 main.go:141] libmachine: Parsing certificate...
	I1002 22:04:32.641776 1440430 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:04:32.641796 1440430 main.go:141] libmachine: Decoding PEM data...
	I1002 22:04:32.641810 1440430 main.go:141] libmachine: Parsing certificate...
	I1002 22:04:32.642216 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:04:32.657894 1440430 cli_runner.go:211] docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:04:32.657997 1440430 network_create.go:284] running [docker network inspect force-systemd-env-915858] to gather additional debugging logs...
	I1002 22:04:32.658017 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858
	W1002 22:04:32.673545 1440430 cli_runner.go:211] docker network inspect force-systemd-env-915858 returned with exit code 1
	I1002 22:04:32.673582 1440430 network_create.go:287] error running [docker network inspect force-systemd-env-915858]: docker network inspect force-systemd-env-915858: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-915858 not found
	I1002 22:04:32.673595 1440430 network_create.go:289] output of [docker network inspect force-systemd-env-915858]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-915858 not found
	
	** /stderr **
	I1002 22:04:32.673693 1440430 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:04:32.690570 1440430 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:04:32.690922 1440430 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:04:32.691299 1440430 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:04:32.691586 1440430 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-66bd0105a658 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:2d:d0:27:f9:2a} reservation:<nil>}
	I1002 22:04:32.692034 1440430 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9cd0}
	I1002 22:04:32.692054 1440430 network_create.go:124] attempt to create docker network force-systemd-env-915858 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 22:04:32.692108 1440430 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-915858 force-systemd-env-915858
	I1002 22:04:32.762077 1440430 network_create.go:108] docker network force-systemd-env-915858 192.168.85.0/24 created
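
The scan above walks candidate private /24 subnets in steps of nine (192.168.49.0/24, .58, .67, and .76 are taken) and settles on the first free one, 192.168.85.0/24. A sketch of that walk in Go, with the taken set hard-coded from this log purely for illustration; this is not minikube's real network package:

	// subnetpick.go: illustrative sketch of the subnet walk above.
	package main

	import (
		"fmt"
		"net"
	)

	// taken would normally be discovered by inspecting existing docker
	// bridges; here it is copied from the log above for illustration.
	var taken = map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}

	func main() {
		// Walk third octets 49, 58, 67, ... and stop at the first free /24.
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			_, ipnet, _ := net.ParseCIDR(cidr) // cidr is constructed, always valid
			fmt.Println("using free private subnet", ipnet)
			return
		}
	}
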
	I1002 22:04:32.762113 1440430 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-915858" container
	I1002 22:04:32.762188 1440430 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:04:32.781055 1440430 cli_runner.go:164] Run: docker volume create force-systemd-env-915858 --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:04:32.800385 1440430 oci.go:103] Successfully created a docker volume force-systemd-env-915858
	I1002 22:04:32.800487 1440430 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-915858-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --entrypoint /usr/bin/test -v force-systemd-env-915858:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:04:33.295947 1440430 oci.go:107] Successfully prepared a docker volume force-systemd-env-915858
	I1002 22:04:33.296013 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:33.296033 1440430 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:04:33.296100 1440430 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-915858:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 22:04:37.746758 1440430 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-915858:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.450608052s)
	I1002 22:04:37.746793 1440430 kic.go:203] duration metric: took 4.450756292s to extract preloaded images to volume ...
	W1002 22:04:37.746942 1440430 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:04:37.747055 1440430 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:04:37.807585 1440430 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-915858 --name force-systemd-env-915858 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-915858 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-915858 --network force-systemd-env-915858 --ip 192.168.85.2 --volume force-systemd-env-915858:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:04:38.152176 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Running}}
	I1002 22:04:38.174604 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.203587 1440430 cli_runner.go:164] Run: docker exec force-systemd-env-915858 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:04:38.254825 1440430 oci.go:144] the created container "force-systemd-env-915858" has a running status.
	I1002 22:04:38.254854 1440430 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa...
	I1002 22:04:38.498107 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 22:04:38.498214 1440430 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:04:38.525139 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.557119 1440430 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:04:38.557138 1440430 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-915858 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:04:38.628829 1440430 cli_runner.go:164] Run: docker container inspect force-systemd-env-915858 --format={{.State.Status}}
	I1002 22:04:38.656220 1440430 machine.go:93] provisionDockerMachine start ...
	I1002 22:04:38.656305 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:38.680041 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:38.680371 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:38.680381 1440430 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:04:38.680990 1440430 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38620->127.0.0.1:34536: read: connection reset by peer
	I1002 22:04:41.813723 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-915858
	
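
The "ssh: handshake failed ... connection reset by peer" at 22:04:38 just means sshd in the freshly started container was not yet accepting connections; the provisioner retries and gets its first successful command back about three seconds later. A minimal dial-with-retry sketch in Go; the fixed backoff and timeout are assumptions, not the provisioner's actual policy:

	// sshwait.go: illustrative retry loop, not the real provisioner.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH redials until sshd accepts TCP connections or the
	// deadline passes; early attempts fail with "connection reset by
	// peer" while the container is still booting, as seen above.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
	}

	func main() {
		if err := waitForSSH("127.0.0.1:34536", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
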
	I1002 22:04:41.813749 1440430 ubuntu.go:182] provisioning hostname "force-systemd-env-915858"
	I1002 22:04:41.813883 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:41.831561 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:41.831881 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:41.831896 1440430 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-915858 && echo "force-systemd-env-915858" | sudo tee /etc/hostname
	I1002 22:04:41.975773 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-915858
	
	I1002 22:04:41.975865 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:41.993432 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:41.993743 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:41.993765 1440430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-915858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-915858/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-915858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:04:42.135705 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:04:42.135837 1440430 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:04:42.135873 1440430 ubuntu.go:190] setting up certificates
	I1002 22:04:42.135915 1440430 provision.go:84] configureAuth start
	I1002 22:04:42.136024 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.156541 1440430 provision.go:143] copyHostCerts
	I1002 22:04:42.156592 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:04:42.156632 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:04:42.156641 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:04:42.156729 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:04:42.156822 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:04:42.156842 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:04:42.156847 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:04:42.156878 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:04:42.156918 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:04:42.156936 1440430 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:04:42.156944 1440430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:04:42.156979 1440430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:04:42.157041 1440430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-915858 san=[127.0.0.1 192.168.85.2 force-systemd-env-915858 localhost minikube]
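
The server certificate generated above carries a SAN list covering every name and address the machine answers to: 127.0.0.1, the static 192.168.85.2, the container hostname, localhost, and minikube. A compact crypto/x509 sketch of issuing such a certificate; it is self-signed here for brevity, whereas the real one is signed with the minikube CA key pair read a few lines earlier:

	// servercert.go: compact sketch; self-signed, unlike the CA-signed
	// certificate the provisioner actually writes to server.pem.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-915858"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list copied from the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:    []string{"force-systemd-env-915858", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
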
	I1002 22:04:42.227863 1440430 provision.go:177] copyRemoteCerts
	I1002 22:04:42.227943 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:04:42.227993 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.249130 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.354427 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 22:04:42.354508 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:04:42.373308 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 22:04:42.373423 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:04:42.392185 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 22:04:42.392247 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:04:42.410534 1440430 provision.go:87] duration metric: took 274.576822ms to configureAuth
	I1002 22:04:42.410560 1440430 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:04:42.410739 1440430 config.go:182] Loaded profile config "force-systemd-env-915858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:04:42.410854 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.427874 1440430 main.go:141] libmachine: Using SSH client type: native
	I1002 22:04:42.428182 1440430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34536 <nil> <nil>}
	I1002 22:04:42.428202 1440430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:04:42.690522 1440430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:04:42.690549 1440430 machine.go:96] duration metric: took 4.034309724s to provisionDockerMachine
	I1002 22:04:42.690570 1440430 client.go:171] duration metric: took 10.048974881s to LocalClient.Create
	I1002 22:04:42.690595 1440430 start.go:167] duration metric: took 10.049074218s to libmachine.API.Create "force-systemd-env-915858"
	I1002 22:04:42.690604 1440430 start.go:293] postStartSetup for "force-systemd-env-915858" (driver="docker")
	I1002 22:04:42.690615 1440430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:04:42.690697 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:04:42.690773 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.714550 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.810478 1440430 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:04:42.813895 1440430 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:04:42.813921 1440430 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:04:42.813933 1440430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:04:42.813988 1440430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:04:42.814102 1440430 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:04:42.814111 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /etc/ssl/certs/12725142.pem
	I1002 22:04:42.814209 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:04:42.822113 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:04:42.840047 1440430 start.go:296] duration metric: took 149.426075ms for postStartSetup
	I1002 22:04:42.840444 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.857883 1440430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/config.json ...
	I1002 22:04:42.858230 1440430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:04:42.858285 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.874869 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:42.967514 1440430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:04:42.972645 1440430 start.go:128] duration metric: took 10.334490976s to createHost
	I1002 22:04:42.972670 1440430 start.go:83] releasing machines lock for "force-systemd-env-915858", held for 10.334659909s
	I1002 22:04:42.972762 1440430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-915858
	I1002 22:04:42.990447 1440430 ssh_runner.go:195] Run: cat /version.json
	I1002 22:04:42.990507 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:42.990513 1440430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:04:42.990644 1440430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-915858
	I1002 22:04:43.020730 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:43.021725 1440430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34536 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/force-systemd-env-915858/id_rsa Username:docker}
	I1002 22:04:43.117928 1440430 ssh_runner.go:195] Run: systemctl --version
	I1002 22:04:43.207039 1440430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:04:43.242705 1440430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:04:43.246878 1440430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:04:43.246982 1440430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:04:43.275057 1440430 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 22:04:43.275119 1440430 start.go:495] detecting cgroup driver to use...
	I1002 22:04:43.275159 1440430 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1002 22:04:43.275235 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:04:43.292413 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:04:43.305271 1440430 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:04:43.305365 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:04:43.323220 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:04:43.340842 1440430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:04:43.449348 1440430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:04:43.566707 1440430 docker.go:234] disabling docker service ...
	I1002 22:04:43.566776 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:04:43.588231 1440430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:04:43.601949 1440430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:04:43.716977 1440430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:04:43.838729 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:04:43.852265 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:04:43.866523 1440430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:04:43.866648 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.876297 1440430 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 22:04:43.876395 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.885983 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.895339 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.904323 1440430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:04:43.912504 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.921098 1440430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.934869 1440430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:04:43.943939 1440430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:04:43.951447 1440430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:04:43.958929 1440430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:04:44.077675 1440430 ssh_runner.go:195] Run: sudo systemctl restart crio
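
Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in pinning the pause image, switching cgroup_manager to "systemd" (the behavior this test exists to force), moving conmon into the pod cgroup, and seeding default_sysctls with net.ipv4.ip_unprivileged_port_start=0, after which crio is restarted. The same rewrite sketched in Go with regexp, the sysctl edit omitted for brevity; minikube really shells out to sed as logged, so this is purely illustrative:

	// crioconf.go: illustrative equivalent of the sed edits above.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// A pre-edit drop-in, reconstructed for illustration.
		conf := "cgroup_manager = \"cgroupfs\"\n" +
			"conmon_cgroup = \"system.slice\"\n" +
			"pause_image = \"registry.k8s.io/pause:3.9\"\n"

		// cgroup_manager = "systemd": the --force-systemd requirement.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		// conmon must land in the pod cgroup when systemd manages cgroups.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, `conmon_cgroup = "pod"`)
		// Pin the pause image expected by this Kubernetes version.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		fmt.Print(conf)
	}
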
	I1002 22:04:44.214364 1440430 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:04:44.214511 1440430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:04:44.218632 1440430 start.go:563] Will wait 60s for crictl version
	I1002 22:04:44.218714 1440430 ssh_runner.go:195] Run: which crictl
	I1002 22:04:44.222355 1440430 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:04:44.246274 1440430 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:04:44.246364 1440430 ssh_runner.go:195] Run: crio --version
	I1002 22:04:44.274095 1440430 ssh_runner.go:195] Run: crio --version
	I1002 22:04:44.305379 1440430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:04:44.308070 1440430 cli_runner.go:164] Run: docker network inspect force-systemd-env-915858 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:04:44.324500 1440430 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:04:44.328381 1440430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:04:44.338466 1440430 kubeadm.go:883] updating cluster {Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:04:44.338586 1440430 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:04:44.338648 1440430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:04:44.372631 1440430 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:04:44.372655 1440430 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:04:44.372710 1440430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:04:44.398204 1440430 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:04:44.398225 1440430 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:04:44.398233 1440430 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:04:44.398323 1440430 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-915858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
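The [Unit]/[Service] fragment above is the kubelet systemd drop-in; the 374-byte scp a few lines below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged unit the node will actually run, one could use (illustrative, not part of this run):
	sudo systemctl cat kubelet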
	I1002 22:04:44.398410 1440430 ssh_runner.go:195] Run: crio config
	I1002 22:04:44.453186 1440430 cni.go:84] Creating CNI manager for ""
	I1002 22:04:44.453210 1440430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:04:44.453229 1440430 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:04:44.453257 1440430 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-915858 NodeName:force-systemd-env-915858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:04:44.453392 1440430 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-915858"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
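The rendered manifest is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to /var/tmp/minikube/kubeadm.yaml just before init. A hypothetical hand-check of that file, assuming the pinned kubeadm binary supports the `config validate` subcommand:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml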
	I1002 22:04:44.453470 1440430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:04:44.461619 1440430 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:04:44.461715 1440430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:04:44.469706 1440430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1002 22:04:44.483726 1440430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:04:44.501672 1440430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1002 22:04:44.515917 1440430 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:04:44.519966 1440430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:04:44.529831 1440430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:04:44.639711 1440430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:04:44.656221 1440430 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858 for IP: 192.168.85.2
	I1002 22:04:44.656288 1440430 certs.go:195] generating shared ca certs ...
	I1002 22:04:44.656320 1440430 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:44.656499 1440430 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:04:44.656579 1440430 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:04:44.656615 1440430 certs.go:257] generating profile certs ...
	I1002 22:04:44.656697 1440430 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key
	I1002 22:04:44.656736 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt with IP's: []
	I1002 22:04:45.001331 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt ...
	I1002 22:04:45.001365 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.crt: {Name:mk85382c5ebce2f456c8d7ef9968f1044df02c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.001590 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key ...
	I1002 22:04:45.001600 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/client.key: {Name:mk34bacf3f283a2ea6b494f3a7d56ca55d3cd26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.001694 1440430 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb
	I1002 22:04:45.001708 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 22:04:45.307321 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb ...
	I1002 22:04:45.307364 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb: {Name:mk551b3cab8f5f77e75b1d1da703ba95fc9b938a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.307626 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb ...
	I1002 22:04:45.307647 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb: {Name:mk8b0c42710d6b9c0ea10564747958515fc169ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.307785 1440430 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt.59cf65eb -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt
	I1002 22:04:45.307929 1440430 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key.59cf65eb -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key
	I1002 22:04:45.308025 1440430 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key
	I1002 22:04:45.308075 1440430 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt with IP's: []
	I1002 22:04:45.442362 1440430 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt ...
	I1002 22:04:45.442395 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt: {Name:mk20e0be43eb9cb75dc89298cc1f6d7320873281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:45.442669 1440430 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key ...
	I1002 22:04:45.442691 1440430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key: {Name:mkda08534f96b345ac944ba639e04637787e4592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
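The apiserver certificate generated above embeds the service VIP, loopback, and node IPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2) as SANs; an illustrative way to confirm that on disk, not part of this run:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"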
	I1002 22:04:45.442861 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 22:04:45.442909 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 22:04:45.442929 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 22:04:45.442946 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 22:04:45.442963 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 22:04:45.443007 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 22:04:45.443027 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 22:04:45.443042 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 22:04:45.443115 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:04:45.443178 1440430 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:04:45.443194 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:04:45.443237 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:04:45.443282 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:04:45.443313 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:04:45.443378 1440430 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:04:45.443428 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem -> /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.443448 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.443467 1440430 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.444074 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:04:45.469931 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:04:45.492973 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:04:45.517640 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:04:45.536370 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1002 22:04:45.554052 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:04:45.571663 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:04:45.591013 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/force-systemd-env-915858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:04:45.610132 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:04:45.629523 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:04:45.647260 1440430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:04:45.665232 1440430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:04:45.679223 1440430 ssh_runner.go:195] Run: openssl version
	I1002 22:04:45.685594 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:04:45.694187 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.698145 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.698209 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:04:45.739203 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:04:45.747860 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:04:45.756653 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.760898 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.760969 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:04:45.802536 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:04:45.810907 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:04:45.819303 1440430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.823169 1440430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.823245 1440430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:04:45.864349 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
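The hash-then-symlink steps above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash, and the certificate is linked as <hash>.0 under /etc/ssl/certs so the TLS stack can find it. For the minikube CA that hash is b5213941, matching the b5213941.0 link created earlier:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941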
	I1002 22:04:45.872822 1440430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:04:45.876417 1440430 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:04:45.876469 1440430 kubeadm.go:400] StartCluster: {Name:force-systemd-env-915858 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-915858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:04:45.876547 1440430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:04:45.876613 1440430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:04:45.905931 1440430 cri.go:89] found id: ""
	I1002 22:04:45.906000 1440430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:04:45.913903 1440430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:04:45.921691 1440430 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:04:45.921813 1440430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:04:45.929694 1440430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:04:45.929713 1440430 kubeadm.go:157] found existing configuration files:
	
	I1002 22:04:45.929766 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:04:45.937349 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:04:45.937444 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:04:45.944716 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:04:45.952321 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:04:45.952409 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:04:45.959868 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:04:45.967205 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:04:45.967283 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:04:45.974415 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:04:45.981891 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:04:45.981967 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:04:45.989386 1440430 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:04:46.055779 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:04:46.056073 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:04:46.130248 1440430 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:08:56.056566 1440430 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:08:56.056669 1440430 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
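The three failing health endpoints named in this error are plain HTTPS and can be probed directly from the node while kubeadm waits; illustrative probes (not captured in this log; -k skips certificate verification):
	curl -k https://192.168.85.2:8443/livez
	curl -k https://127.0.0.1:10257/healthz
	curl -k https://127.0.0.1:10259/livez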
	I1002 22:08:56.061413 1440430 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:08:56.061612 1440430 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:08:56.061722 1440430 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:08:56.061794 1440430 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:08:56.061834 1440430 kubeadm.go:318] OS: Linux
	I1002 22:08:56.061909 1440430 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:08:56.061970 1440430 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:08:56.062020 1440430 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:08:56.062122 1440430 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:08:56.062175 1440430 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:08:56.062230 1440430 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:08:56.062279 1440430 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:08:56.062340 1440430 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:08:56.062451 1440430 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:08:56.062537 1440430 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:08:56.062643 1440430 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:08:56.062743 1440430 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:08:56.062812 1440430 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:08:56.066775 1440430 out.go:252]   - Generating certificates and keys ...
	I1002 22:08:56.066889 1440430 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:08:56.066963 1440430 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:08:56.067056 1440430 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:08:56.067124 1440430 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:08:56.067193 1440430 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:08:56.067250 1440430 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:08:56.067311 1440430 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:08:56.067451 1440430 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:08:56.067511 1440430 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:08:56.067646 1440430 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:08:56.067718 1440430 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:08:56.067788 1440430 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:08:56.067839 1440430 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:08:56.067901 1440430 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:08:56.067958 1440430 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:08:56.068022 1440430 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:08:56.068083 1440430 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:08:56.068153 1440430 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:08:56.068214 1440430 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:08:56.068303 1440430 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:08:56.068376 1440430 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:08:56.071442 1440430 out.go:252]   - Booting up control plane ...
	I1002 22:08:56.071593 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:08:56.071702 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:08:56.071784 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:08:56.071937 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:08:56.072048 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:08:56.072187 1440430 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:08:56.072295 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:08:56.072345 1440430 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:08:56.072485 1440430 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:08:56.072599 1440430 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:08:56.072665 1440430 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.009744286s
	I1002 22:08:56.072765 1440430 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:08:56.072854 1440430 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:08:56.072951 1440430 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:08:56.073037 1440430 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:08:56.073122 1440430 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000128324s
	I1002 22:08:56.073203 1440430 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001043566s
	I1002 22:08:56.073292 1440430 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000910499s
	I1002 22:08:56.073301 1440430 kubeadm.go:318] 
	I1002 22:08:56.073397 1440430 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:08:56.073557 1440430 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:08:56.073671 1440430 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:08:56.073782 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:08:56.073870 1440430 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:08:56.073957 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:08:56.074081 1440430 kubeadm.go:318] 
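kubeadm's crictl hint above is the right first step on a CRI-O node; the usual follow-ups, sketched here on the assumption that kubelet and crio log to journald, are:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo journalctl -u kubelet --no-pager -n 50
	sudo journalctl -u crio --no-pager -n 50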
	W1002 22:08:56.074161 1440430 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-915858 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.009744286s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000128324s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001043566s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000910499s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 22:08:56.074248 1440430 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 22:08:56.626849 1440430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:08:56.639797 1440430 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:08:56.639864 1440430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:08:56.647978 1440430 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:08:56.647995 1440430 kubeadm.go:157] found existing configuration files:
	
	I1002 22:08:56.648056 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:08:56.655974 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:08:56.656037 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:08:56.663910 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:08:56.671636 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:08:56.671754 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:08:56.678964 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:08:56.686822 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:08:56.686938 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:08:56.694507 1440430 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:08:56.702135 1440430 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:08:56.702244 1440430 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:08:56.710350 1440430 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:08:56.749404 1440430 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:08:56.749524 1440430 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:08:56.772393 1440430 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:08:56.772513 1440430 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:08:56.772576 1440430 kubeadm.go:318] OS: Linux
	I1002 22:08:56.772643 1440430 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:08:56.772720 1440430 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:08:56.772790 1440430 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:08:56.772867 1440430 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:08:56.772937 1440430 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:08:56.773015 1440430 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:08:56.773085 1440430 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:08:56.773161 1440430 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:08:56.773226 1440430 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:08:56.842884 1440430 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:08:56.843049 1440430 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:08:56.843150 1440430 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:08:56.851023 1440430 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:08:56.857843 1440430 out.go:252]   - Generating certificates and keys ...
	I1002 22:08:56.857952 1440430 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:08:56.858060 1440430 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:08:56.858205 1440430 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 22:08:56.858286 1440430 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 22:08:56.858403 1440430 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 22:08:56.858473 1440430 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 22:08:56.858549 1440430 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 22:08:56.858826 1440430 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 22:08:56.859235 1440430 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 22:08:56.859583 1440430 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 22:08:56.859908 1440430 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 22:08:56.860005 1440430 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:08:57.682312 1440430 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:08:58.082947 1440430 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:08:58.180038 1440430 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:08:59.800280 1440430 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:09:00.065631 1440430 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:09:00.083521 1440430 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:09:00.083611 1440430 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:09:00.088108 1440430 out.go:252]   - Booting up control plane ...
	I1002 22:09:00.088218 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:09:00.097928 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:09:00.098019 1440430 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:09:00.143569 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:09:00.143686 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:09:00.143797 1440430 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:09:00.143886 1440430 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:09:00.144667 1440430 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:09:00.341995 1440430 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:09:00.342675 1440430 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:09:01.346476 1440430 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001809653s
	I1002 22:09:01.349551 1440430 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:09:01.349666 1440430 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:09:01.349799 1440430 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:09:01.349943 1440430 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:13:01.349712 1440430 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	I1002 22:13:01.354998 1440430 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	I1002 22:13:01.362317 1440430 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	I1002 22:13:01.362424 1440430 kubeadm.go:318] 
	I1002 22:13:01.362574 1440430 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:13:01.362704 1440430 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:13:01.362831 1440430 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:13:01.363004 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:13:01.363154 1440430 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:13:01.363262 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:13:01.363269 1440430 kubeadm.go:318] 
	I1002 22:13:01.364642 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:13:01.364953 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:13:01.365114 1440430 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:13:01.365768 1440430 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:13:01.365871 1440430 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 22:13:01.365903 1440430 kubeadm.go:402] duration metric: took 8m15.489438301s to StartCluster
	I1002 22:13:01.365940 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:13:01.366000 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:13:01.392057 1440430 cri.go:89] found id: ""
	I1002 22:13:01.392089 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.392098 1440430 logs.go:284] No container was found matching "kube-apiserver"
	I1002 22:13:01.392105 1440430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:13:01.392161 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:13:01.423335 1440430 cri.go:89] found id: ""
	I1002 22:13:01.423360 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.423369 1440430 logs.go:284] No container was found matching "etcd"
	I1002 22:13:01.423375 1440430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:13:01.423498 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:13:01.452664 1440430 cri.go:89] found id: ""
	I1002 22:13:01.452694 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.452708 1440430 logs.go:284] No container was found matching "coredns"
	I1002 22:13:01.452715 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:13:01.452775 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:13:01.477610 1440430 cri.go:89] found id: ""
	I1002 22:13:01.477635 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.477643 1440430 logs.go:284] No container was found matching "kube-scheduler"
	I1002 22:13:01.477649 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:13:01.477706 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:13:01.504674 1440430 cri.go:89] found id: ""
	I1002 22:13:01.504699 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.504708 1440430 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:13:01.504714 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:13:01.504772 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:13:01.531051 1440430 cri.go:89] found id: ""
	I1002 22:13:01.531187 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.531209 1440430 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 22:13:01.531244 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:13:01.531326 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:13:01.561737 1440430 cri.go:89] found id: ""
	I1002 22:13:01.561758 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.561766 1440430 logs.go:284] No container was found matching "kindnet"
	I1002 22:13:01.561775 1440430 logs.go:123] Gathering logs for kubelet ...
	I1002 22:13:01.561786 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:13:01.650098 1440430 logs.go:123] Gathering logs for dmesg ...
	I1002 22:13:01.650130 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:13:01.667235 1440430 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:13:01.667267 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:13:01.745871 1440430 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:13:01.735748    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.736433    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738227    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738813    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.740628    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 22:13:01.735748    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.736433    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738227    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738813    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.740628    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:13:01.745906 1440430 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:13:01.745920 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:13:01.822605 1440430 logs.go:123] Gathering logs for container status ...
	I1002 22:13:01.822645 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 22:13:01.855286 1440430 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001809653s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 22:13:01.855350 1440430 out.go:285] * 
	W1002 22:13:01.855432 1440430 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001809653s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:13:01.855477 1440430 out.go:285] * 
	W1002 22:13:01.857988 1440430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:13:01.863693 1440430 out.go:203] 
	W1002 22:13:01.866799 1440430 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001809653s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 22:13:01.866899 1440430 out.go:285] * 
	I1002 22:13:01.869979 1440430 out.go:203] 

                                                
                                                
** /stderr **
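The run above shows kubeadm's wait-control-plane phase timing out: kube-apiserver, kube-controller-manager, and kube-scheduler all stayed unreachable for the full 4m0s, and the crictl sweeps that followed found no control-plane containers at all, which suggests the static pods were never created rather than crash-looping. A minimal triage sketch from the host, assuming the profile container is still running (the profile name and CRI-O socket come from this run; the flags are illustrative):

	# kubelet-side errors usually explain missing static pods
	out/minikube-linux-arm64 -p force-systemd-env-915858 ssh -- sudo journalctl -u kubelet -n 100 --no-pager
	# list all CRI containers, including exited ones, through CRI-O's socket
	out/minikube-linux-arm64 -p force-systemd-env-915858 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a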
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-02 22:13:01.928087333 +0000 UTC m=+4047.176546530
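The failing start is reproducible from the recorded invocation; a sketch against a clean profile (binary path and flags copied from the failure line above):

	# remove any leftover state for the profile, then re-run the exact start command
	out/minikube-linux-arm64 delete -p force-systemd-env-915858
	out/minikube-linux-arm64 start -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio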
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-915858
helpers_test.go:243: (dbg) docker inspect force-systemd-env-915858:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b",
	        "Created": "2025-10-02T22:04:37.822549205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1440824,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:04:37.894710565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b/hosts",
	        "LogPath": "/var/lib/docker/containers/ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b/ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b-json.log",
	        "Name": "/force-systemd-env-915858",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-915858:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-915858",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ddaed2e3ec9965c356d2f9abd192c624fd015b3a887bc138aa31b5a410382d4b",
	                "LowerDir": "/var/lib/docker/overlay2/e60211ff11ba7eb746c5dbd1139467e919e9ff19537843c3e428bfc16fd81783-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e60211ff11ba7eb746c5dbd1139467e919e9ff19537843c3e428bfc16fd81783/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e60211ff11ba7eb746c5dbd1139467e919e9ff19537843c3e428bfc16fd81783/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e60211ff11ba7eb746c5dbd1139467e919e9ff19537843c3e428bfc16fd81783/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-915858",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-915858/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-915858",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-915858",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-915858",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "274c2031dfbb6ccd03f8edf2019380ae102139776ab33209f980c5e5770cceed",
	            "SandboxKey": "/var/run/docker/netns/274c2031dfbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34540"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34538"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-915858": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:9d:69:45:1b:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7db13a28f32b76e45c91b2dbd012683a88f7949de4afe67a057efd37d10b4b9f",
	                    "EndpointID": "38a89fc58871eaf39620f44cd82ce8cec4ed51acdb0f35e8119bd0d3a4566748",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-915858",
	                        "ddaed2e3ec99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
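The inspect output shows the kic container itself is fine: "State.Status" is "running", the node holds 192.168.85.2 on the profile network, and 8443/tcp is published to 127.0.0.1:34539, so the failure sits inside the node rather than at the Docker layer. The same fields can be pulled without scanning the full JSON using stock `docker inspect` Go templating (a sketch; the profile name comes from this run):

	# container state, one word
	docker inspect -f '{{.State.Status}}' force-systemd-env-915858
	# host port backing the apiserver's 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' force-systemd-env-915858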
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-915858 -n force-systemd-env-915858
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-915858 -n force-systemd-env-915858: exit status 6 (351.317084ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 22:13:02.288766 1447516 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-915858" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
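Exit status 6 is expected here: the apiserver never came up, so the profile was never written to /home/jenkins/minikube-integration/21682-1270657/kubeconfig and the endpoint lookup fails. On a healthy cluster with a merely stale context, the warning's own suggestion is the fix (a sketch; profile name from this run):

	out/minikube-linux-arm64 -p force-systemd-env-915858 update-context
	kubectl config current-context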
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-915858 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status docker --all --full --no-pager                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat docker --no-pager                                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/docker/daemon.json                                                          │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo docker system info                                                                   │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cri-dockerd --version                                                                │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat containerd --no-pager                                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/containerd/config.toml                                                      │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo containerd config dump                                                               │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status crio --all --full --no-pager                                        │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat crio --no-pager                                                        │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo crio config                                                                          │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-915858  │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-292135 │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ delete  │ -p force-systemd-flag-292135                                                                               │ force-systemd-flag-292135 │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-247949    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
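The audit trail above shows that the start invocation for force-systemd-env-915858 never recorded an end time; that is the run this post-mortem covers. To reproduce it outside the test harness, the logged arguments can be replayed directly; a sketch using the binary built for this job:

    # Replay of the audited start command that timed out (row above).
    out/minikube-linux-arm64 start -p force-systemd-env-915858 \
      --memory=3072 --alsologtostderr -v=5 \
      --driver=docker --container-runtime=crio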
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:10:57
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:10:57.165267 1444759 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:10:57.165390 1444759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:10:57.165394 1444759 out.go:374] Setting ErrFile to fd 2...
	I1002 22:10:57.165397 1444759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:10:57.165642 1444759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:10:57.166075 1444759 out.go:368] Setting JSON to false
	I1002 22:10:57.166991 1444759 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24783,"bootTime":1759418275,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:10:57.167049 1444759 start.go:140] virtualization:  
	I1002 22:10:57.171054 1444759 out.go:179] * [cert-expiration-247949] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:10:57.176219 1444759 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:10:57.176279 1444759 notify.go:220] Checking for updates...
	I1002 22:10:57.180050 1444759 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:10:57.183597 1444759 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:10:57.186945 1444759 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:10:57.190381 1444759 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:10:57.193658 1444759 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:10:57.197396 1444759 config.go:182] Loaded profile config "force-systemd-env-915858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:10:57.197484 1444759 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:10:57.231204 1444759 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:10:57.231362 1444759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:10:57.289150 1444759 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:10:57.280170946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:10:57.289249 1444759 docker.go:318] overlay module found
	I1002 22:10:57.292782 1444759 out.go:179] * Using the docker driver based on user configuration
	I1002 22:10:57.295848 1444759 start.go:304] selected driver: docker
	I1002 22:10:57.295856 1444759 start.go:924] validating driver "docker" against <nil>
	I1002 22:10:57.295868 1444759 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:10:57.296620 1444759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:10:57.356263 1444759 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:10:57.346610982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:10:57.356400 1444759 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:10:57.356614 1444759 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 22:10:57.359753 1444759 out.go:179] * Using Docker driver with root privileges
	I1002 22:10:57.362880 1444759 cni.go:84] Creating CNI manager for ""
	I1002 22:10:57.362942 1444759 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:10:57.362952 1444759 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:10:57.363043 1444759 start.go:348] cluster config:
	{Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:10:57.368237 1444759 out.go:179] * Starting "cert-expiration-247949" primary control-plane node in "cert-expiration-247949" cluster
	I1002 22:10:57.371246 1444759 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:10:57.374223 1444759 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:10:57.377191 1444759 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:10:57.377242 1444759 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:10:57.377265 1444759 cache.go:58] Caching tarball of preloaded images
	I1002 22:10:57.377313 1444759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:10:57.377353 1444759 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:10:57.377361 1444759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:10:57.377464 1444759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/config.json ...
	I1002 22:10:57.377479 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/config.json: {Name:mk4f779e2ec1dbecb27d484f8264a4941d5d0e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:10:57.396480 1444759 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:10:57.396494 1444759 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:10:57.396508 1444759 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:10:57.396530 1444759 start.go:360] acquireMachinesLock for cert-expiration-247949: {Name:mk2d86ac4c57797e7b17530e8bdce2bc6b8f9b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:10:57.396624 1444759 start.go:364] duration metric: took 79.826µs to acquireMachinesLock for "cert-expiration-247949"
	I1002 22:10:57.396649 1444759 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:10:57.396713 1444759 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:10:57.400308 1444759 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:10:57.400516 1444759 start.go:159] libmachine.API.Create for "cert-expiration-247949" (driver="docker")
	I1002 22:10:57.400554 1444759 client.go:168] LocalClient.Create starting
	I1002 22:10:57.400621 1444759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:10:57.400653 1444759 main.go:141] libmachine: Decoding PEM data...
	I1002 22:10:57.400665 1444759 main.go:141] libmachine: Parsing certificate...
	I1002 22:10:57.400724 1444759 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:10:57.400744 1444759 main.go:141] libmachine: Decoding PEM data...
	I1002 22:10:57.400753 1444759 main.go:141] libmachine: Parsing certificate...
	I1002 22:10:57.401113 1444759 cli_runner.go:164] Run: docker network inspect cert-expiration-247949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:10:57.416851 1444759 cli_runner.go:211] docker network inspect cert-expiration-247949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:10:57.416929 1444759 network_create.go:284] running [docker network inspect cert-expiration-247949] to gather additional debugging logs...
	I1002 22:10:57.416944 1444759 cli_runner.go:164] Run: docker network inspect cert-expiration-247949
	W1002 22:10:57.436544 1444759 cli_runner.go:211] docker network inspect cert-expiration-247949 returned with exit code 1
	I1002 22:10:57.436564 1444759 network_create.go:287] error running [docker network inspect cert-expiration-247949]: docker network inspect cert-expiration-247949: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-247949 not found
	I1002 22:10:57.436575 1444759 network_create.go:289] output of [docker network inspect cert-expiration-247949]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-247949 not found
	
	** /stderr **
	I1002 22:10:57.436673 1444759 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:10:57.452856 1444759 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:10:57.453185 1444759 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:10:57.453502 1444759 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:10:57.453937 1444759 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a53630}
	I1002 22:10:57.453952 1444759 network_create.go:124] attempt to create docker network cert-expiration-247949 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 22:10:57.454008 1444759 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-247949 cert-expiration-247949
	I1002 22:10:57.520899 1444759 network_create.go:108] docker network cert-expiration-247949 192.168.76.0/24 created
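The network phase above is inspect-then-create: subnets 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are skipped as taken, and the first free candidate is created with a fixed gateway and MTU. The equivalent manual sequence, condensed from the commands logged above:

    # Probe first; a non-zero exit means the network must be created.
    docker network inspect cert-expiration-247949 >/dev/null 2>&1 || \
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=cert-expiration-247949 \
      cert-expiration-247949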
	I1002 22:10:57.520936 1444759 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-247949" container
	I1002 22:10:57.521011 1444759 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:10:57.538577 1444759 cli_runner.go:164] Run: docker volume create cert-expiration-247949 --label name.minikube.sigs.k8s.io=cert-expiration-247949 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:10:57.556035 1444759 oci.go:103] Successfully created a docker volume cert-expiration-247949
	I1002 22:10:57.556113 1444759 cli_runner.go:164] Run: docker run --rm --name cert-expiration-247949-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-247949 --entrypoint /usr/bin/test -v cert-expiration-247949:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:10:58.035490 1444759 oci.go:107] Successfully prepared a docker volume cert-expiration-247949
	I1002 22:10:58.035546 1444759 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:10:58.035565 1444759 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:10:58.035640 1444759 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-247949:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 22:11:02.510460 1444759 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-247949:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.474785408s)
	I1002 22:11:02.510495 1444759 kic.go:203] duration metric: took 4.474925402s to extract preloaded images to volume ...
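Preloading avoids pulling images inside the guest: a throwaway container mounts the lz4 tarball read-only next to the machine volume and untars straight into it (about 4.5s here). A condensed sketch of the same pattern; the two variables stand in for the long host path and pinned kicbase digest shown above:

    # PRELOAD_TARBALL and KICBASE_IMAGE are placeholders for the values
    # logged above; -I lz4 hands decompression to the lz4 binary.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
      -v cert-expiration-247949:/extractDir \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir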
	W1002 22:11:02.510651 1444759 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:11:02.510760 1444759 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:11:02.562947 1444759 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-247949 --name cert-expiration-247949 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-247949 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-247949 --network cert-expiration-247949 --ip 192.168.76.2 --volume cert-expiration-247949:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:11:02.860118 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Running}}
	I1002 22:11:02.879509 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:02.904378 1444759 cli_runner.go:164] Run: docker exec cert-expiration-247949 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:11:02.958347 1444759 oci.go:144] the created container "cert-expiration-247949" has a running status.
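The node itself is a single privileged docker run (logged above), then verified by inspecting State.Running and stat-ing an iptables alternatives file. A trimmed sketch keeping only the flags that define a kic node; the minikube labels and remaining port publishes are elided, and KICBASE_IMAGE again stands in for the pinned digest:

    # Privileged node container: systemd-friendly tmpfs mounts, the machine
    # volume on /var, a static IP on the profile network, localhost-only ports.
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --hostname cert-expiration-247949 --name cert-expiration-247949 \
      --network cert-expiration-247949 --ip 192.168.76.2 \
      --volume cert-expiration-247949:/var \
      --memory=3072mb --cpus=2 -e container=docker \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      "$KICBASE_IMAGE"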
	I1002 22:11:02.958368 1444759 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa...
	I1002 22:11:03.152009 1444759 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:11:03.176649 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:03.202312 1444759 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:11:03.202324 1444759 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-247949 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:11:03.246341 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:03.262960 1444759 machine.go:93] provisionDockerMachine start ...
	I1002 22:11:03.263053 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:03.285606 1444759 main.go:141] libmachine: Using SSH client type: native
	I1002 22:11:03.285929 1444759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:11:03.285936 1444759 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:11:03.286631 1444759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38866->127.0.0.1:34541: read: connection reset by peer
	I1002 22:11:06.417802 1444759 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-247949
	
	I1002 22:11:06.417816 1444759 ubuntu.go:182] provisioning hostname "cert-expiration-247949"
	I1002 22:11:06.417879 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:06.436018 1444759 main.go:141] libmachine: Using SSH client type: native
	I1002 22:11:06.436316 1444759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:11:06.436325 1444759 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-247949 && echo "cert-expiration-247949" | sudo tee /etc/hostname
	I1002 22:11:06.579187 1444759 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-247949
	
	I1002 22:11:06.579265 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:06.596551 1444759 main.go:141] libmachine: Using SSH client type: native
	I1002 22:11:06.596848 1444759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:11:06.596862 1444759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-247949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-247949/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-247949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:11:06.726472 1444759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:11:06.726490 1444759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:11:06.726509 1444759 ubuntu.go:190] setting up certificates
	I1002 22:11:06.726518 1444759 provision.go:84] configureAuth start
	I1002 22:11:06.726581 1444759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-247949
	I1002 22:11:06.743737 1444759 provision.go:143] copyHostCerts
	I1002 22:11:06.743809 1444759 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:11:06.743817 1444759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:11:06.743896 1444759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:11:06.743986 1444759 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:11:06.743990 1444759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:11:06.744019 1444759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:11:06.744070 1444759 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:11:06.744073 1444759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:11:06.744094 1444759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:11:06.744141 1444759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-247949 san=[127.0.0.1 192.168.76.2 cert-expiration-247949 localhost minikube]
	I1002 22:11:06.970994 1444759 provision.go:177] copyRemoteCerts
	I1002 22:11:06.971042 1444759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:11:06.971079 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.001729 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:07.102446 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 22:11:07.120856 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:11:07.138553 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:11:07.155740 1444759 provision.go:87] duration metric: took 429.19975ms to configureAuth
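configureAuth (above) mints a server certificate whose SANs cover every name the machine answers to, namely 127.0.0.1, 192.168.76.2, cert-expiration-247949, localhost and minikube, then ships it to /etc/docker over SSH. minikube generates these certs in Go; purely as an illustrative stand-in, an openssl invocation producing the same SAN set would look like:

    # Hypothetical openssl equivalent of the SAN list logged above; not
    # what minikube actually runs. Requires OpenSSL 1.1.1+ for -addext.
    openssl req -new -x509 -key server-key.pem -out server.pem -days 365 \
      -subj "/O=jenkins.cert-expiration-247949" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:cert-expiration-247949,DNS:localhost,DNS:minikube"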
	I1002 22:11:07.155756 1444759 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:11:07.155940 1444759 config.go:182] Loaded profile config "cert-expiration-247949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:11:07.156036 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.173922 1444759 main.go:141] libmachine: Using SSH client type: native
	I1002 22:11:07.174240 1444759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:11:07.174252 1444759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:11:07.416960 1444759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:11:07.416972 1444759 machine.go:96] duration metric: took 4.154000528s to provisionDockerMachine
	I1002 22:11:07.416981 1444759 client.go:171] duration metric: took 10.016422038s to LocalClient.Create
	I1002 22:11:07.416998 1444759 start.go:167] duration metric: took 10.016482845s to libmachine.API.Create "cert-expiration-247949"
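Provisioning finishes by dropping an insecure-registry flag for the service CIDR into a sysconfig file and restarting CRI-O (the SSH command at 22:11:07.174 above). Reproduced as a plain shell sequence for the guest:

    # Recreate the drop-in written over SSH above, then bounce CRI-O so it
    # treats the in-cluster service CIDR as an insecure registry range.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio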
	I1002 22:11:07.417004 1444759 start.go:293] postStartSetup for "cert-expiration-247949" (driver="docker")
	I1002 22:11:07.417012 1444759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:11:07.417086 1444759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:11:07.417127 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.434478 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:07.530158 1444759 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:11:07.533771 1444759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:11:07.533789 1444759 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:11:07.533800 1444759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:11:07.533858 1444759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:11:07.533932 1444759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:11:07.534071 1444759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:11:07.541644 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:11:07.559372 1444759 start.go:296] duration metric: took 142.354016ms for postStartSetup
	I1002 22:11:07.559748 1444759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-247949
	I1002 22:11:07.576622 1444759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/config.json ...
	I1002 22:11:07.576898 1444759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:11:07.576937 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.594142 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:07.686940 1444759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:11:07.691745 1444759 start.go:128] duration metric: took 10.295018281s to createHost
	I1002 22:11:07.691759 1444759 start.go:83] releasing machines lock for "cert-expiration-247949", held for 10.29512858s
	I1002 22:11:07.691829 1444759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-247949
	I1002 22:11:07.707886 1444759 ssh_runner.go:195] Run: cat /version.json
	I1002 22:11:07.708218 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.708220 1444759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:11:07.708280 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:07.729197 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:07.732460 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:07.913861 1444759 ssh_runner.go:195] Run: systemctl --version
	I1002 22:11:07.920109 1444759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:11:07.956582 1444759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:11:07.960876 1444759 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:11:07.960938 1444759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:11:07.990403 1444759 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 22:11:07.990415 1444759 start.go:495] detecting cgroup driver to use...
	I1002 22:11:07.990446 1444759 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:11:07.990493 1444759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:11:08.008742 1444759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:11:08.024090 1444759 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:11:08.024155 1444759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:11:08.043471 1444759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:11:08.063388 1444759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:11:08.180181 1444759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:11:08.309144 1444759 docker.go:234] disabling docker service ...
	I1002 22:11:08.309203 1444759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:11:08.331528 1444759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:11:08.344274 1444759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:11:08.475632 1444759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:11:08.590161 1444759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:11:08.603319 1444759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:11:08.617524 1444759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:11:08.617584 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.626590 1444759 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:11:08.626664 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.636003 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.644809 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.655310 1444759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:11:08.663977 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.673050 1444759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.686623 1444759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:11:08.695555 1444759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:11:08.703228 1444759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:11:08.710817 1444759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:11:08.835914 1444759 ssh_runner.go:195] Run: sudo systemctl restart crio
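The 22:11:08.6 through 22:11:08.8 steps rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, open unprivileged low ports via default_sysctls, and enable IPv4 forwarding before the restart. Collapsed into one script from the sed commands logged above:

    # Condensed from the logged sed/sysctl sequence; run inside the guest.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio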
	I1002 22:11:08.961572 1444759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:11:08.961633 1444759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:11:08.965549 1444759 start.go:563] Will wait 60s for crictl version
	I1002 22:11:08.965602 1444759 ssh_runner.go:195] Run: which crictl
	I1002 22:11:08.970084 1444759 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:11:09.011275 1444759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:11:09.011366 1444759 ssh_runner.go:195] Run: crio --version
	I1002 22:11:09.041239 1444759 ssh_runner.go:195] Run: crio --version
	I1002 22:11:09.078436 1444759 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:11:09.081368 1444759 cli_runner.go:164] Run: docker network inspect cert-expiration-247949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:11:09.096671 1444759 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:11:09.100249 1444759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
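The host.minikube.internal mapping is refreshed with a filter-and-append rewrite rather than sed, so repeated starts never stack duplicate entries. The pattern from the line above, isolated (the whitespace before the hostname is a literal tab):

    # Idempotent /etc/hosts update as logged: strip any stale entry for the
    # name, append the fresh one, copy the result back with sudo.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts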
	I1002 22:11:09.109975 1444759 kubeadm.go:883] updating cluster {Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:11:09.110101 1444759 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:11:09.110158 1444759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:11:09.143175 1444759 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:11:09.143187 1444759 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:11:09.143252 1444759 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:11:09.171017 1444759 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:11:09.171028 1444759 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:11:09.171035 1444759 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:11:09.171130 1444759 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-247949 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:11:09.171212 1444759 ssh_runner.go:195] Run: crio config
	I1002 22:11:09.224383 1444759 cni.go:84] Creating CNI manager for ""
	I1002 22:11:09.224395 1444759 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:11:09.224411 1444759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:11:09.224435 1444759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-247949 NodeName:cert-expiration-247949 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:11:09.224548 1444759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-247949"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:11:09.224617 1444759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:11:09.232457 1444759 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:11:09.232520 1444759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:11:09.240209 1444759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 22:11:09.254562 1444759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:11:09.267307 1444759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
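The multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what kubeadm consumes further down. As a minimal sketch, once the .new file has been moved into place as /var/tmp/minikube/kubeadm.yaml (see the cp a few lines below), the staged v1.34.1 kubeadm binary can check it offline before anything is started ("kubeadm config validate" exists in kubeadm >= v1.26):

	# hedged sanity check of the generated multi-document config; starts nothing
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml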
	I1002 22:11:09.280064 1444759 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:11:09.283483 1444759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:11:09.293388 1444759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:11:09.400887 1444759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:11:09.417977 1444759 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949 for IP: 192.168.76.2
	I1002 22:11:09.417988 1444759 certs.go:195] generating shared ca certs ...
	I1002 22:11:09.418002 1444759 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:09.418191 1444759 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:11:09.418227 1444759 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:11:09.418233 1444759 certs.go:257] generating profile certs ...
	I1002 22:11:09.418301 1444759 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key
	I1002 22:11:09.418311 1444759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt with IP's: []
	I1002 22:11:09.698480 1444759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt ...
	I1002 22:11:09.698497 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt: {Name:mk949fbce5691824432b6fbe669eb2de4be7c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:09.698711 1444759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key ...
	I1002 22:11:09.698721 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key: {Name:mk6e9f2c97446e1e12f13ef33a2b7cf688f68790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:09.698818 1444759 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472
	I1002 22:11:09.698831 1444759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:11:10.471003 1444759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 ...
	I1002 22:11:10.471018 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472: {Name:mk9a3a606123afb40827f98ee97ae7e6b48ab39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:10.471212 1444759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472 ...
	I1002 22:11:10.471219 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472: {Name:mk2d8fd8aa8cc0ef28f8849586874d59b755bc51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:10.471315 1444759 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt
	I1002 22:11:10.471387 1444759 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key
	I1002 22:11:10.471438 1444759 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key
	I1002 22:11:10.471450 1444759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt with IP's: []
	I1002 22:11:10.924580 1444759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt ...
	I1002 22:11:10.924595 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt: {Name:mk45c1889e4a25adecad200fe9310467215d1567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:10.924790 1444759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key ...
	I1002 22:11:10.924798 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key: {Name:mk93b91063bcb0bf6d118c05229c11f6a715679f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:10.924992 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:11:10.925028 1444759 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:11:10.925035 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:11:10.925061 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:11:10.925081 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:11:10.925103 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:11:10.925147 1444759 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:11:10.925689 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:11:10.947329 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:11:10.975212 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:11:10.992909 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:11:11.013409 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 22:11:11.033467 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:11:11.051621 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:11:11.071680 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:11:11.091515 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:11:11.111097 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:11:11.131355 1444759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:11:11.150784 1444759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:11:11.165138 1444759 ssh_runner.go:195] Run: openssl version
	I1002 22:11:11.171673 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:11:11.181871 1444759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:11:11.186300 1444759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:11:11.186376 1444759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:11:11.227920 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:11:11.236462 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:11:11.244688 1444759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:11:11.248838 1444759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:11:11.248896 1444759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:11:11.290183 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:11:11.298571 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:11:11.306714 1444759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:11:11.310625 1444759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:11:11.310687 1444759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:11:11.351663 1444759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
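The link names created here (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: the basename is the certificate's subject hash, which is exactly what the "openssl x509 -hash -noout" runs above print, with a ".0" suffix to disambiguate collisions. A sketch of reproducing one of them by hand, using a cert path from this log:

	# prints the subject hash used as the symlink basename in /etc/ssl/certs
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# per the b5213941.0 link created above, this is expected to print: b5213941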
	I1002 22:11:11.360144 1444759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:11:11.363664 1444759 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:11:11.363710 1444759 kubeadm.go:400] StartCluster: {Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:11:11.363772 1444759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:11:11.363838 1444759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:11:11.390994 1444759 cri.go:89] found id: ""
	I1002 22:11:11.391090 1444759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:11:11.399323 1444759 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:11:11.407435 1444759 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:11:11.407494 1444759 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:11:11.415680 1444759 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:11:11.415689 1444759 kubeadm.go:157] found existing configuration files:
	
	I1002 22:11:11.415752 1444759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:11:11.423771 1444759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:11:11.423836 1444759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:11:11.431400 1444759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:11:11.439485 1444759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:11:11.439555 1444759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:11:11.447168 1444759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:11:11.454917 1444759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:11:11.454974 1444759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:11:11.462745 1444759 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:11:11.470609 1444759 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:11:11.470666 1444759 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:11:11.478422 1444759 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:11:11.543613 1444759 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:11:11.543891 1444759 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:11:11.621743 1444759 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:11:28.588372 1444759 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:11:28.588422 1444759 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:11:28.588512 1444759 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:11:28.588568 1444759 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:11:28.588602 1444759 kubeadm.go:318] OS: Linux
	I1002 22:11:28.588648 1444759 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:11:28.588698 1444759 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:11:28.588746 1444759 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:11:28.588795 1444759 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:11:28.588844 1444759 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:11:28.588894 1444759 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:11:28.588939 1444759 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:11:28.588989 1444759 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:11:28.589038 1444759 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:11:28.589112 1444759 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:11:28.589209 1444759 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:11:28.589301 1444759 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:11:28.589364 1444759 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:11:28.592384 1444759 out.go:252]   - Generating certificates and keys ...
	I1002 22:11:28.592467 1444759 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:11:28.592533 1444759 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:11:28.592602 1444759 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:11:28.592676 1444759 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:11:28.592739 1444759 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:11:28.592790 1444759 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:11:28.592852 1444759 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:11:28.592983 1444759 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-247949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:11:28.593037 1444759 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:11:28.593165 1444759 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-247949 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:11:28.593233 1444759 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:11:28.593298 1444759 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:11:28.593343 1444759 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:11:28.593420 1444759 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:11:28.593479 1444759 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:11:28.593537 1444759 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:11:28.593595 1444759 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:11:28.593660 1444759 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:11:28.593716 1444759 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:11:28.593801 1444759 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:11:28.593869 1444759 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:11:28.596977 1444759 out.go:252]   - Booting up control plane ...
	I1002 22:11:28.597121 1444759 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:11:28.597233 1444759 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:11:28.597311 1444759 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:11:28.597433 1444759 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:11:28.597533 1444759 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:11:28.597679 1444759 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:11:28.597775 1444759 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:11:28.597827 1444759 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:11:28.597985 1444759 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:11:28.598142 1444759 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:11:28.598216 1444759 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001667581s
	I1002 22:11:28.598326 1444759 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:11:28.598409 1444759 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:11:28.598502 1444759 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:11:28.598585 1444759 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:11:28.598675 1444759 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.817705226s
	I1002 22:11:28.598746 1444759 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.03212273s
	I1002 22:11:28.598814 1444759 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00228206s
	I1002 22:11:28.598924 1444759 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:11:28.599053 1444759 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:11:28.599124 1444759 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:11:28.599343 1444759 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-247949 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:11:28.599401 1444759 kubeadm.go:318] [bootstrap-token] Using token: tpoaxi.13anfrcq07ned3fq
	I1002 22:11:28.604406 1444759 out.go:252]   - Configuring RBAC rules ...
	I1002 22:11:28.604540 1444759 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:11:28.604644 1444759 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:11:28.604826 1444759 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:11:28.604983 1444759 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:11:28.605109 1444759 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:11:28.605202 1444759 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:11:28.605349 1444759 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:11:28.605404 1444759 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:11:28.605456 1444759 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:11:28.605460 1444759 kubeadm.go:318] 
	I1002 22:11:28.605523 1444759 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:11:28.605526 1444759 kubeadm.go:318] 
	I1002 22:11:28.605618 1444759 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:11:28.605622 1444759 kubeadm.go:318] 
	I1002 22:11:28.605647 1444759 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:11:28.605711 1444759 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:11:28.605773 1444759 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:11:28.605777 1444759 kubeadm.go:318] 
	I1002 22:11:28.605841 1444759 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:11:28.605844 1444759 kubeadm.go:318] 
	I1002 22:11:28.605893 1444759 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:11:28.605896 1444759 kubeadm.go:318] 
	I1002 22:11:28.605950 1444759 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:11:28.606155 1444759 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:11:28.606234 1444759 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:11:28.606238 1444759 kubeadm.go:318] 
	I1002 22:11:28.606348 1444759 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:11:28.606462 1444759 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:11:28.606467 1444759 kubeadm.go:318] 
	I1002 22:11:28.606579 1444759 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tpoaxi.13anfrcq07ned3fq \
	I1002 22:11:28.606699 1444759 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:11:28.606728 1444759 kubeadm.go:318] 	--control-plane 
	I1002 22:11:28.606736 1444759 kubeadm.go:318] 
	I1002 22:11:28.606834 1444759 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:11:28.606838 1444759 kubeadm.go:318] 
	I1002 22:11:28.606939 1444759 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tpoaxi.13anfrcq07ned3fq \
	I1002 22:11:28.607065 1444759 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
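The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A hedged sketch of recomputing it on this control plane, following the upstream kubeadm recipe but with minikube's CA path (/var/lib/minikube/certs/ca.crt rather than kubeadm's default /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected to match the sha256 printed in the join commands above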
	I1002 22:11:28.607072 1444759 cni.go:84] Creating CNI manager for ""
	I1002 22:11:28.607078 1444759 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:11:28.610174 1444759 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 22:11:28.612940 1444759 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:11:28.617523 1444759 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:11:28.617535 1444759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:11:28.630456 1444759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:11:28.941458 1444759 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:11:28.941526 1444759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:11:28.941578 1444759 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-247949 minikube.k8s.io/updated_at=2025_10_02T22_11_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=cert-expiration-247949 minikube.k8s.io/primary=true
	I1002 22:11:28.958167 1444759 ops.go:34] apiserver oom_adj: -16
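The -16 read back here shows the apiserver process is strongly deprioritized for the kernel OOM killer. A quick way to double-check on the node: oom_adj is the legacy interface, and reading -16 from it is consistent with the oom_score_adj of -997 that kubelet assigns to node-critical pods (the kernel keeps the two files in sync at roughly a 17/1000 ratio):

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy value, as read above
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern -1000..1000 scale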
	I1002 22:11:29.137431 1444759 kubeadm.go:1113] duration metric: took 195.96524ms to wait for elevateKubeSystemPrivileges
	I1002 22:11:29.152721 1444759 kubeadm.go:402] duration metric: took 17.789006528s to StartCluster
	I1002 22:11:29.152752 1444759 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:29.152814 1444759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:11:29.153609 1444759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:11:29.153886 1444759 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:11:29.153984 1444759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:11:29.154299 1444759 config.go:182] Loaded profile config "cert-expiration-247949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:11:29.154337 1444759 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:11:29.154406 1444759 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-247949"
	I1002 22:11:29.154419 1444759 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-247949"
	I1002 22:11:29.154465 1444759 host.go:66] Checking if "cert-expiration-247949" exists ...
	I1002 22:11:29.155030 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:29.155559 1444759 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-247949"
	I1002 22:11:29.155573 1444759 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-247949"
	I1002 22:11:29.155864 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:29.158425 1444759 out.go:179] * Verifying Kubernetes components...
	I1002 22:11:29.161893 1444759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:11:29.185126 1444759 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-247949"
	I1002 22:11:29.185153 1444759 host.go:66] Checking if "cert-expiration-247949" exists ...
	I1002 22:11:29.185571 1444759 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:11:29.205891 1444759 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:11:29.210212 1444759 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:11:29.210224 1444759 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:11:29.210304 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:29.227223 1444759 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:11:29.227239 1444759 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:11:29.227332 1444759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:11:29.250152 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:29.266181 1444759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:11:29.447666 1444759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:11:29.447687 1444759 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:11:29.465220 1444759 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:11:29.465284 1444759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:11:29.501621 1444759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:11:29.561533 1444759 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:11:29.821211 1444759 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
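The sed pipeline a few lines up rewrites the coredns ConfigMap in place rather than patching it. Reconstructed from that sed expression (the result is not captured verbatim in this log), the block it inserts ahead of the Corefile's "forward . /etc/resolv.conf" line is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

plus a "log" directive ahead of "errors". The fallthrough keeps every name other than host.minikube.internal flowing on to the upstream resolver.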
	I1002 22:11:29.821354 1444759 api_server.go:72] duration metric: took 667.445425ms to wait for apiserver process to appear ...
	I1002 22:11:29.821368 1444759 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:11:29.821385 1444759 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:11:29.854830 1444759 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:11:29.858162 1444759 api_server.go:141] control plane version: v1.34.1
	I1002 22:11:29.858179 1444759 api_server.go:131] duration metric: took 36.805813ms to wait for apiserver health ...
	I1002 22:11:29.858186 1444759 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:11:29.876506 1444759 system_pods.go:59] 4 kube-system pods found
	I1002 22:11:29.876526 1444759 system_pods.go:61] "etcd-cert-expiration-247949" [d88d6bb3-5e1b-4806-8956-33106e695083] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:11:29.876533 1444759 system_pods.go:61] "kube-apiserver-cert-expiration-247949" [141d280b-7c31-4efc-adf2-eecd388295fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:11:29.876540 1444759 system_pods.go:61] "kube-controller-manager-cert-expiration-247949" [b7910ec2-474d-42f3-a532-d227ece4ba73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:11:29.876547 1444759 system_pods.go:61] "kube-scheduler-cert-expiration-247949" [b57e525e-8949-42ce-8aa7-14923c36f575] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:11:29.876552 1444759 system_pods.go:74] duration metric: took 18.36152ms to wait for pod list to return data ...
	I1002 22:11:29.876563 1444759 kubeadm.go:586] duration metric: took 722.656654ms to wait for: map[apiserver:true system_pods:true]
	I1002 22:11:29.876574 1444759 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:11:29.880289 1444759 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:11:29.880311 1444759 node_conditions.go:123] node cpu capacity is 2
	I1002 22:11:29.880322 1444759 node_conditions.go:105] duration metric: took 3.744222ms to run NodePressure ...
	I1002 22:11:29.880334 1444759 start.go:241] waiting for startup goroutines ...
	I1002 22:11:30.171721 1444759 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 22:11:30.174574 1444759 addons.go:514] duration metric: took 1.020199648s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 22:11:30.325757 1444759 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-247949" context rescaled to 1 replicas
	I1002 22:11:30.325786 1444759 start.go:246] waiting for cluster config update ...
	I1002 22:11:30.325797 1444759 start.go:255] writing updated cluster config ...
	I1002 22:11:30.326147 1444759 ssh_runner.go:195] Run: rm -f paused
	I1002 22:11:30.392313 1444759 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:11:30.395614 1444759 out.go:179] * Done! kubectl is now configured to use "cert-expiration-247949" cluster and "default" namespace by default
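The skew note above (kubectl 1.33.2 against a 1.34.1 cluster) is within Kubernetes' support policy, which allows kubectl to be one minor version ahead of or behind the apiserver. A quick re-check of the pair once the profile is active:

	kubectl version   # prints client and server versions; +/-1 minor skew is supported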
	I1002 22:13:01.349712 1440430 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	I1002 22:13:01.354998 1440430 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	I1002 22:13:01.362317 1440430 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	I1002 22:13:01.362424 1440430 kubeadm.go:318] 
	I1002 22:13:01.362574 1440430 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 22:13:01.362704 1440430 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 22:13:01.362831 1440430 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 22:13:01.363004 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 22:13:01.363154 1440430 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 22:13:01.363262 1440430 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 22:13:01.363269 1440430 kubeadm.go:318] 
	I1002 22:13:01.364642 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:13:01.364953 1440430 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:13:01.365114 1440430 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:13:01.365768 1440430 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 22:13:01.365871 1440430 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
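Collecting the steps kubeadm suggests above into one runnable triage pass (both commands are taken from the hint itself; CONTAINERID is a placeholder for an ID printed by the first command):

	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

The 'found id: ""' results that follow show minikube's own version of this triage finding no kube-system containers at all, consistent with the static pods never having been created.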
	I1002 22:13:01.365903 1440430 kubeadm.go:402] duration metric: took 8m15.489438301s to StartCluster
	I1002 22:13:01.365940 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:13:01.366000 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:13:01.392057 1440430 cri.go:89] found id: ""
	I1002 22:13:01.392089 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.392098 1440430 logs.go:284] No container was found matching "kube-apiserver"
	I1002 22:13:01.392105 1440430 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:13:01.392161 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:13:01.423335 1440430 cri.go:89] found id: ""
	I1002 22:13:01.423360 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.423369 1440430 logs.go:284] No container was found matching "etcd"
	I1002 22:13:01.423375 1440430 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:13:01.423498 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:13:01.452664 1440430 cri.go:89] found id: ""
	I1002 22:13:01.452694 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.452708 1440430 logs.go:284] No container was found matching "coredns"
	I1002 22:13:01.452715 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:13:01.452775 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:13:01.477610 1440430 cri.go:89] found id: ""
	I1002 22:13:01.477635 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.477643 1440430 logs.go:284] No container was found matching "kube-scheduler"
	I1002 22:13:01.477649 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:13:01.477706 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:13:01.504674 1440430 cri.go:89] found id: ""
	I1002 22:13:01.504699 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.504708 1440430 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:13:01.504714 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:13:01.504772 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:13:01.531051 1440430 cri.go:89] found id: ""
	I1002 22:13:01.531187 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.531209 1440430 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 22:13:01.531244 1440430 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:13:01.531326 1440430 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:13:01.561737 1440430 cri.go:89] found id: ""
	I1002 22:13:01.561758 1440430 logs.go:282] 0 containers: []
	W1002 22:13:01.561766 1440430 logs.go:284] No container was found matching "kindnet"
	I1002 22:13:01.561775 1440430 logs.go:123] Gathering logs for kubelet ...
	I1002 22:13:01.561786 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:13:01.650098 1440430 logs.go:123] Gathering logs for dmesg ...
	I1002 22:13:01.650130 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:13:01.667235 1440430 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:13:01.667267 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:13:01.745871 1440430 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:13:01.735748    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.736433    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738227    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738813    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.740628    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 22:13:01.735748    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.736433    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738227    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.738813    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:01.740628    2368 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:13:01.745906 1440430 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:13:01.745920 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:13:01.822605 1440430 logs.go:123] Gathering logs for container status ...
	I1002 22:13:01.822645 1440430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 22:13:01.855286 1440430 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001809653s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000122678s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000208527s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000828806s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 22:13:01.855350 1440430 out.go:285] * 
	W1002 22:13:01.855432 1440430 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output shown above]
	
	W1002 22:13:01.855477 1440430 out.go:285] * 
	W1002 22:13:01.857988 1440430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:13:01.863693 1440430 out.go:203] 
	W1002 22:13:01.866799 1440430 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output shown above]
	
	W1002 22:13:01.866899 1440430 out.go:285] * 
	I1002 22:13:01.869979 1440430 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 22:12:53 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:53.019483181Z" level=info msg="createCtr: removing container b7e380a7ca4572b9a266821dba202c56b4772d81d6b0a3bebbbe9c648d94f6ee" id=aa94ad32-6e49-4d6b-a520-65fed4aec023 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:53 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:53.019523919Z" level=info msg="createCtr: deleting container b7e380a7ca4572b9a266821dba202c56b4772d81d6b0a3bebbbe9c648d94f6ee from storage" id=aa94ad32-6e49-4d6b-a520-65fed4aec023 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:53 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:53.022532219Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-915858_kube-system_4168c5d5572de406a67126c0a086c12c_0" id=aa94ad32-6e49-4d6b-a520-65fed4aec023 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.96875261Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=ce3c121c-2ea6-45d2-9db3-061fcbd2c94b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.969611631Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=572770d0-724e-4547-a27a-16a59bd1837b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.970613296Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-env-915858/kube-scheduler" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.970842379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.975379793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.976012077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.986320057Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.987450622Z" level=info msg="createCtr: deleting container ID 83eb7b6a2f64e888f4e40c55cb0c6af33c1553121e77a5a599cd1f8c5eb3981a from idIndex" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.987489801Z" level=info msg="createCtr: removing container 83eb7b6a2f64e888f4e40c55cb0c6af33c1553121e77a5a599cd1f8c5eb3981a" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.987522998Z" level=info msg="createCtr: deleting container 83eb7b6a2f64e888f4e40c55cb0c6af33c1553121e77a5a599cd1f8c5eb3981a from storage" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:12:55 force-systemd-env-915858 crio[838]: time="2025-10-02T22:12:55.996880151Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-env-915858_kube-system_cfa8a7b7ed199e3f3c2cb71a3ad9d23d_0" id=5f2784d2-5f19-4ad3-b3d4-9b931b972269 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.969094115Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2b834adf-e0cd-43a5-a386-cf544d4c9c44 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.970137748Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=10c1eef4-966f-4ac1-8f71-e87c577abb7b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.971119901Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-915858/kube-apiserver" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.971457093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.986188085Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:13:01 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:01.986875941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:13:02 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:02.010510482Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:02 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:02.014445797Z" level=info msg="createCtr: deleting container ID ab2c959020de88765af1cf6a86d5514f44360342ca51413655aec7a53cc35c66 from idIndex" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:02 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:02.014605105Z" level=info msg="createCtr: removing container ab2c959020de88765af1cf6a86d5514f44360342ca51413655aec7a53cc35c66" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:02 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:02.014715446Z" level=info msg="createCtr: deleting container ab2c959020de88765af1cf6a86d5514f44360342ca51413655aec7a53cc35c66 from storage" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:13:02 force-systemd-env-915858 crio[838]: time="2025-10-02T22:13:02.024180314Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-915858_kube-system_ff4ba874d31d941193879299003aad8d_0" id=e6cd4a9a-81d9-4dbf-a34a-42bc89e4967a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:13:02.927408    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:02.928142    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:02.929720    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:02.930102    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 22:13:02.931780    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +2.995481] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:37] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 22:13:02 up  6:55,  0 user,  load average: 1.35, 1.17, 1.71
	Linux force-systemd-env-915858 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 22:12:53 force-systemd-env-915858 kubelet[1790]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-env-915858_kube-system(4168c5d5572de406a67126c0a086c12c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:12:53 force-systemd-env-915858 kubelet[1790]:  > logger="UnhandledError"
	Oct 02 22:12:53 force-systemd-env-915858 kubelet[1790]: E1002 22:12:53.023028    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-env-915858" podUID="4168c5d5572de406a67126c0a086c12c"
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]: E1002 22:12:55.968333    1790 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-915858\" not found" node="force-systemd-env-915858"
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]: E1002 22:12:55.997204    1790 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]:  > podSandboxID="6a9201b49b3af7d1b56111887b735f4bf3113ef8b961a70adff8e9789ddd3370"
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]: E1002 22:12:55.997573    1790 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-env-915858_kube-system(cfa8a7b7ed199e3f3c2cb71a3ad9d23d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]:  > logger="UnhandledError"
	Oct 02 22:12:55 force-systemd-env-915858 kubelet[1790]: E1002 22:12:55.998204    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-env-915858" podUID="cfa8a7b7ed199e3f3c2cb71a3ad9d23d"
	Oct 02 22:12:57 force-systemd-env-915858 kubelet[1790]: E1002 22:12:57.600334    1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-915858?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 02 22:12:57 force-systemd-env-915858 kubelet[1790]: I1002 22:12:57.796676    1790 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-915858"
	Oct 02 22:12:57 force-systemd-env-915858 kubelet[1790]: E1002 22:12:57.797072    1790 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-915858"
	Oct 02 22:12:57 force-systemd-env-915858 kubelet[1790]: E1002 22:12:57.884522    1790 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.85.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-env-915858&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 22:13:01 force-systemd-env-915858 kubelet[1790]: E1002 22:13:01.036012    1790 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-915858\" not found"
	Oct 02 22:13:01 force-systemd-env-915858 kubelet[1790]: E1002 22:13:01.968619    1790 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-915858\" not found" node="force-systemd-env-915858"
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]: E1002 22:13:02.024993    1790 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]:  > podSandboxID="b3a9dda425a0bea37300c48d46c5f9bd6799415e225feb65b077537a0a00f3a2"
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]: E1002 22:13:02.025112    1790 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-915858_kube-system(ff4ba874d31d941193879299003aad8d): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]:  > logger="UnhandledError"
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]: E1002 22:13:02.025147    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-915858" podUID="ff4ba874d31d941193879299003aad8d"
	Oct 02 22:13:02 force-systemd-env-915858 kubelet[1790]: E1002 22:13:02.657740    1790 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-915858.186acc086d434d31  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-915858,UID:force-systemd-env-915858,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-915858 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-915858,},FirstTimestamp:2025-10-02 22:09:01.006531889 +0000 UTC m=+0.671325176,LastTimestamp:2025-10-02 22:09:01.006531889 +0000 UTC m=+0.671325176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:force-systemd-env-915858,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-915858 -n force-systemd-env-915858
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-915858 -n force-systemd-env-915858: exit status 6 (334.885691ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 22:13:03.384494 1447734 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-915858" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-915858" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-915858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-915858
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-915858: (1.949194547s)
--- FAIL: TestForceSystemdEnv (512.97s)
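
The kubelet and CRI-O entries above all fail with the same error, `cannot open sd-bus: No such file or directory`, raised while creating the control-plane containers. That error typically means the container runtime is configured for the systemd cgroup manager but no systemd D-Bus socket is reachable on the node. The sketch below is one way to triage this by hand; it assumes shell access to the node via `minikube ssh`, the default CRI-O config paths, and that a cgroupfs fallback is acceptable, none of which is verified by this run.

    # Triage sketch (assumptions above; run inside `minikube ssh`):
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    grep -R "cgroup_manager" /etc/crio/        # is CRI-O using the systemd manager?
    ls -l /run/dbus/system_bus_socket          # is any D-Bus socket present?

    # Possible workaround (assumption: CRI-O honors drop-ins in /etc/crio/crio.conf.d):
    # switch to the cgroupfs manager so container creation no longer needs sd-bus.
    sudo mkdir -p /etc/crio/crio.conf.d
    printf '[crio.runtime]\ncgroup_manager = "cgroupfs"\nconmon_cgroup = "pod"\n' \
      | sudo tee /etc/crio/crio.conf.d/99-cgroupfs.conf
    sudo systemctl restart crio

Note that this test forces systemd mode by design (per its name), so the cgroupfs switch above is a diagnostic aid rather than a fix for the test itself.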

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-758263 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-758263 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9hcjc" [81a5a591-6c1f-4566-a01d-d26bb8987cb3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-758263 -n functional-758263
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 21:26:35.415186428 +0000 UTC m=+1260.663645690
functional_test.go:1645: (dbg) Run:  kubectl --context functional-758263 describe po hello-node-connect-7d85dfc575-9hcjc -n default
functional_test.go:1645: (dbg) kubectl --context functional-758263 describe po hello-node-connect-7d85dfc575-9hcjc -n default:
Name:             hello-node-connect-7d85dfc575-9hcjc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-758263/192.168.49.2
Start Time:       Thu, 02 Oct 2025 21:16:34 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dp26g (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dp26g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9hcjc to functional-758263
  Normal   Pulling    7m12s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m12s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m12s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m58s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m44s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-758263 logs hello-node-connect-7d85dfc575-9hcjc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-758263 logs hello-node-connect-7d85dfc575-9hcjc -n default: exit status 1 (107.60284ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9hcjc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-758263 logs hello-node-connect-7d85dfc575-9hcjc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-758263 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
[pod describe output identical to the block shown above]

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-758263 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-758263 logs -l app=hello-node-connect: exit status 1 (94.319893ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9hcjc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-758263 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-758263 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.172.131
IPs:                      10.103.172.131
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32180/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
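
The events above record the actual root cause: `short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list`. The deployment was created with an unqualified image name, and the node's containers-registries configuration enforces short-name resolution, so the pull is rejected before any registry is tried. Two ways around this are sketched below; the registries.conf.d drop-in path and docker.io as the intended registry for kicbase/echo-server are assumptions, not facts read from this run.

    # Option 1: reference the image by a fully qualified name.
    kubectl --context functional-758263 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest

    # Option 2: add a short-name alias on the node (run inside `minikube ssh`)
    # so the unqualified name resolves unambiguously.
    printf '[aliases]\n"kicbase/echo-server" = "docker.io/kicbase/echo-server"\n' \
      | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf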
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-758263
helpers_test.go:243: (dbg) docker inspect functional-758263:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463",
	        "Created": "2025-10-02T21:13:56.641327646Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1288094,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:13:56.706561201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463/hostname",
	        "HostsPath": "/var/lib/docker/containers/00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463/hosts",
	        "LogPath": "/var/lib/docker/containers/00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463/00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463-json.log",
	        "Name": "/functional-758263",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-758263:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-758263",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00996c49f3899573ef1804ce053ccc99dd17414cdf6055a3fa567c09e6223463",
	                "LowerDir": "/var/lib/docker/overlay2/807a50a78cb8922c31c81743d8d82a0cb6ea6c2977f36e148b7eea6c8c1ac1b0-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/807a50a78cb8922c31c81743d8d82a0cb6ea6c2977f36e148b7eea6c8c1ac1b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/807a50a78cb8922c31c81743d8d82a0cb6ea6c2977f36e148b7eea6c8c1ac1b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/807a50a78cb8922c31c81743d8d82a0cb6ea6c2977f36e148b7eea6c8c1ac1b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-758263",
	                "Source": "/var/lib/docker/volumes/functional-758263/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-758263",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-758263",
	                "name.minikube.sigs.k8s.io": "functional-758263",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ceb808833fbf48a57519964832821743fee11c203c3e5ea0e722bb29231d5f76",
	            "SandboxKey": "/var/run/docker/netns/ceb808833fbf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34281"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34282"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34285"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34283"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34284"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-758263": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:99:d4:b3:42:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08b7b5b8ff2390014fa419b8014073b684ca2a41bf1cecc78465a2613e390b91",
	                    "EndpointID": "1a50a5ae517b8b52a32fe5f14c7b18d829e2f940cf5ddd572a30dd5eadbdb8a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-758263",
	                        "00996c49f389"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-758263 -n functional-758263
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 logs -n 25: (1.44264943s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-758263 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:15 UTC │ 02 Oct 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 21:15 UTC │ 02 Oct 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 21:15 UTC │ 02 Oct 25 21:15 UTC │
	│ kubectl │ functional-758263 kubectl -- --context functional-758263 get pods                                                          │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:15 UTC │ 02 Oct 25 21:15 UTC │
	│ start   │ -p functional-758263 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:15 UTC │ 02 Oct 25 21:16 UTC │
	│ service │ invalid-svc -p functional-758263                                                                                           │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ config  │ functional-758263 config unset cpus                                                                                        │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ cp      │ functional-758263 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ config  │ functional-758263 config get cpus                                                                                          │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ config  │ functional-758263 config set cpus 2                                                                                        │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ config  │ functional-758263 config get cpus                                                                                          │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ config  │ functional-758263 config unset cpus                                                                                        │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ functional-758263 ssh -n functional-758263 sudo cat /home/docker/cp-test.txt                                               │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ config  │ functional-758263 config get cpus                                                                                          │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ ssh     │ functional-758263 ssh echo hello                                                                                           │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ cp      │ functional-758263 cp functional-758263:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4260570615/001/cp-test.txt │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ functional-758263 ssh cat /etc/hostname                                                                                    │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ ssh     │ functional-758263 ssh -n functional-758263 sudo cat /home/docker/cp-test.txt                                               │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ tunnel  │ functional-758263 tunnel --alsologtostderr                                                                                 │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ tunnel  │ functional-758263 tunnel --alsologtostderr                                                                                 │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ cp      │ functional-758263 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ tunnel  │ functional-758263 tunnel --alsologtostderr                                                                                 │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │                     │
	│ ssh     │ functional-758263 ssh -n functional-758263 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ addons  │ functional-758263 addons list                                                                                              │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	│ addons  │ functional-758263 addons list -o json                                                                                      │ functional-758263 │ jenkins │ v1.37.0 │ 02 Oct 25 21:16 UTC │ 02 Oct 25 21:16 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:15:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:15:44.841473 1292252 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:15:44.841608 1292252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:15:44.841612 1292252 out.go:374] Setting ErrFile to fd 2...
	I1002 21:15:44.841616 1292252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:15:44.841992 1292252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:15:44.842613 1292252 out.go:368] Setting JSON to false
	I1002 21:15:44.843872 1292252 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21470,"bootTime":1759418275,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:15:44.843969 1292252 start.go:140] virtualization:  
	I1002 21:15:44.847592 1292252 out.go:179] * [functional-758263] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:15:44.851430 1292252 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:15:44.851507 1292252 notify.go:220] Checking for updates...
	I1002 21:15:44.857330 1292252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:15:44.860263 1292252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:15:44.863215 1292252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:15:44.866386 1292252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:15:44.869335 1292252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:15:44.872683 1292252 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:15:44.872776 1292252 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:15:44.897184 1292252 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:15:44.897299 1292252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:15:44.968351 1292252 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:15:44.959249553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:15:44.968447 1292252 docker.go:318] overlay module found
	I1002 21:15:44.971528 1292252 out.go:179] * Using the docker driver based on existing profile
	I1002 21:15:44.974371 1292252 start.go:304] selected driver: docker
	I1002 21:15:44.974381 1292252 start.go:924] validating driver "docker" against &{Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:15:44.974483 1292252 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:15:44.974617 1292252 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:15:45.043728 1292252 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 21:15:45.025661878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:15:45.044260 1292252 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:15:45.044295 1292252 cni.go:84] Creating CNI manager for ""
	I1002 21:15:45.044350 1292252 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:15:45.044394 1292252 start.go:348] cluster config:
	{Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:15:45.048292 1292252 out.go:179] * Starting "functional-758263" primary control-plane node in "functional-758263" cluster
	I1002 21:15:45.051428 1292252 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:15:45.056882 1292252 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:15:45.060220 1292252 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:15:45.060258 1292252 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:15:45.060373 1292252 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:15:45.060385 1292252 cache.go:58] Caching tarball of preloaded images
	I1002 21:15:45.060473 1292252 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:15:45.060483 1292252 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:15:45.060604 1292252 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/config.json ...
	I1002 21:15:45.093659 1292252 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:15:45.093677 1292252 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:15:45.093730 1292252 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:15:45.093763 1292252 start.go:360] acquireMachinesLock for functional-758263: {Name:mkc297d86cff275255578de127f3ad2fbc11792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:15:45.093898 1292252 start.go:364] duration metric: took 104.277µs to acquireMachinesLock for "functional-758263"
	I1002 21:15:45.093929 1292252 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:15:45.093943 1292252 fix.go:54] fixHost starting: 
	I1002 21:15:45.094400 1292252 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
	I1002 21:15:45.154337 1292252 fix.go:112] recreateIfNeeded on functional-758263: state=Running err=<nil>
	W1002 21:15:45.154360 1292252 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:15:45.157873 1292252 out.go:252] * Updating the running docker "functional-758263" container ...
	I1002 21:15:45.157909 1292252 machine.go:93] provisionDockerMachine start ...
	I1002 21:15:45.158138 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:45.194903 1292252 main.go:141] libmachine: Using SSH client type: native
	I1002 21:15:45.195287 1292252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34281 <nil> <nil>}
	I1002 21:15:45.195294 1292252 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:15:45.378114 1292252 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-758263
	
	I1002 21:15:45.378151 1292252 ubuntu.go:182] provisioning hostname "functional-758263"
	I1002 21:15:45.378253 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:45.403776 1292252 main.go:141] libmachine: Using SSH client type: native
	I1002 21:15:45.404113 1292252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34281 <nil> <nil>}
	I1002 21:15:45.404122 1292252 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-758263 && echo "functional-758263" | sudo tee /etc/hostname
	I1002 21:15:45.567818 1292252 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-758263
	
	I1002 21:15:45.567940 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:45.585603 1292252 main.go:141] libmachine: Using SSH client type: native
	I1002 21:15:45.585894 1292252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34281 <nil> <nil>}
	I1002 21:15:45.585908 1292252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-758263' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-758263/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-758263' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:15:45.718767 1292252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
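	The hosts-file step above is deliberately idempotent: it only touches /etc/hosts when no entry for the node name exists, preferring to rewrite an existing 127.0.1.1 line over appending a new one. A standalone sketch of the same logic (node name taken from the profile; this is a reconstruction for readability, not the exact script minikube ships):
	
		NODE=functional-758263
		# Only act if no /etc/hosts line already maps some address to $NODE.
		if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
		  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
		    # Reuse the conventional Debian/Ubuntu 127.0.1.1 self-entry.
		    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
		  else
		    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
		  fi
		fi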
	I1002 21:15:45.718784 1292252 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 21:15:45.718810 1292252 ubuntu.go:190] setting up certificates
	I1002 21:15:45.718819 1292252 provision.go:84] configureAuth start
	I1002 21:15:45.718891 1292252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-758263
	I1002 21:15:45.737020 1292252 provision.go:143] copyHostCerts
	I1002 21:15:45.737080 1292252 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 21:15:45.737106 1292252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 21:15:45.737184 1292252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 21:15:45.737277 1292252 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 21:15:45.737281 1292252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 21:15:45.737305 1292252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 21:15:45.737352 1292252 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 21:15:45.737356 1292252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 21:15:45.737377 1292252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 21:15:45.737418 1292252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.functional-758263 san=[127.0.0.1 192.168.49.2 functional-758263 localhost minikube]
	I1002 21:15:46.046765 1292252 provision.go:177] copyRemoteCerts
	I1002 21:15:46.046828 1292252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:15:46.046867 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:46.065254 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:15:46.162097 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 21:15:46.181258 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 21:15:46.199340 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:15:46.217317 1292252 provision.go:87] duration metric: took 498.474254ms to configureAuth
	I1002 21:15:46.217333 1292252 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:15:46.217523 1292252 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:15:46.217631 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:46.234961 1292252 main.go:141] libmachine: Using SSH client type: native
	I1002 21:15:46.235277 1292252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34281 <nil> <nil>}
	I1002 21:15:46.235289 1292252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:15:51.601232 1292252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:15:51.601244 1292252 machine.go:96] duration metric: took 6.443328327s to provisionDockerMachine
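	The sysconfig drop-in written just above is minikube's hook for passing extra flags to CRI-O: /etc/sysconfig/crio.minikube marks the whole service CIDR (10.96.0.0/12) as an insecure registry, presumably so in-cluster registries such as the registry addon can be pulled from without TLS. A quick way to confirm the setting landed and the restart took, using only paths from the log:
	
		# Inside the node (minikube ssh -p functional-758263):
		sudo cat /etc/sysconfig/crio.minikube
		# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		systemctl is-active crio   # prints "active" once the restart completes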
	I1002 21:15:51.601254 1292252 start.go:293] postStartSetup for "functional-758263" (driver="docker")
	I1002 21:15:51.601265 1292252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:15:51.601349 1292252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:15:51.601390 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:51.619776 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:15:51.714363 1292252 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:15:51.717947 1292252 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:15:51.717966 1292252 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:15:51.717975 1292252 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 21:15:51.718059 1292252 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 21:15:51.718135 1292252 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 21:15:51.718208 1292252 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/test/nested/copy/1272514/hosts -> hosts in /etc/test/nested/copy/1272514
	I1002 21:15:51.718258 1292252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1272514
	I1002 21:15:51.726020 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 21:15:51.744260 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/test/nested/copy/1272514/hosts --> /etc/test/nested/copy/1272514/hosts (40 bytes)
	I1002 21:15:51.762399 1292252 start.go:296] duration metric: took 161.130201ms for postStartSetup
	I1002 21:15:51.762491 1292252 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:15:51.762543 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:51.780831 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:15:51.875665 1292252 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:15:51.881055 1292252 fix.go:56] duration metric: took 6.787113693s for fixHost
	I1002 21:15:51.881071 1292252 start.go:83] releasing machines lock for "functional-758263", held for 6.787163595s
	I1002 21:15:51.881138 1292252 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-758263
	I1002 21:15:51.904797 1292252 ssh_runner.go:195] Run: cat /version.json
	I1002 21:15:51.904838 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:51.905093 1292252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:15:51.905144 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:15:51.922461 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:15:51.931419 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:15:52.022563 1292252 ssh_runner.go:195] Run: systemctl --version
	I1002 21:15:52.116848 1292252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:15:52.153976 1292252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:15:52.158980 1292252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:15:52.159047 1292252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:15:52.167666 1292252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:15:52.167681 1292252 start.go:495] detecting cgroup driver to use...
	I1002 21:15:52.167714 1292252 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 21:15:52.167761 1292252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:15:52.183719 1292252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:15:52.197601 1292252 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:15:52.197666 1292252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:15:52.213566 1292252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:15:52.227235 1292252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:15:52.362611 1292252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:15:52.500801 1292252 docker.go:234] disabling docker service ...
	I1002 21:15:52.500867 1292252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:15:52.516903 1292252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:15:52.530176 1292252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:15:52.676163 1292252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:15:52.814790 1292252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:15:52.828207 1292252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:15:52.843366 1292252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:15:52.843440 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.852548 1292252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:15:52.852618 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.861668 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.870992 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.880249 1292252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:15:52.888489 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.897482 1292252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.905691 1292252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:15:52.914955 1292252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:15:52.922200 1292252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:15:52.929677 1292252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:15:53.064215 1292252 ssh_runner.go:195] Run: sudo systemctl restart crio
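	Taken together, the sed pipeline above converges CRI-O's drop-in config on the three things the kubelet below depends on: a pinned pause image, the cgroupfs cgroup manager (matching the driver detected on the host), and unprivileged low ports for pods. Reconstructed from the commands (the section headers are assumed from the stock crio.conf layout, not captured from disk), /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly:
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]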
	I1002 21:15:53.268313 1292252 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:15:53.268372 1292252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:15:53.272660 1292252 start.go:563] Will wait 60s for crictl version
	I1002 21:15:53.272722 1292252 ssh_runner.go:195] Run: which crictl
	I1002 21:15:53.277124 1292252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:15:53.302981 1292252 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:15:53.303049 1292252 ssh_runner.go:195] Run: crio --version
	I1002 21:15:53.333327 1292252 ssh_runner.go:195] Run: crio --version
	I1002 21:15:53.367217 1292252 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:15:53.370179 1292252 cli_runner.go:164] Run: docker network inspect functional-758263 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:15:53.389602 1292252 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:15:53.396626 1292252 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 21:15:53.399451 1292252 kubeadm.go:883] updating cluster {Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:15:53.399565 1292252 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:15:53.399642 1292252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:15:53.433229 1292252 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:15:53.433242 1292252 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:15:53.433300 1292252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:15:53.460060 1292252 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:15:53.460073 1292252 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:15:53.460080 1292252 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 21:15:53.460176 1292252 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-758263 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:15:53.460258 1292252 ssh_runner.go:195] Run: crio config
	I1002 21:15:53.541827 1292252 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 21:15:53.541849 1292252 cni.go:84] Creating CNI manager for ""
	I1002 21:15:53.541857 1292252 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:15:53.541865 1292252 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:15:53.541886 1292252 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-758263 NodeName:functional-758263 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:15:53.542006 1292252 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-758263"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:15:53.542148 1292252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:15:53.554281 1292252 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:15:53.554356 1292252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:15:53.565271 1292252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 21:15:53.584082 1292252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:15:53.600160 1292252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
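	The manifest rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new rather than applied immediately; the drift check further down decides whether it replaces the live config. Once the cluster settles, one way to confirm the admission-plugin override actually reached the API server (context name taken from the profile; the label selector is the standard one kubeadm puts on its static pods):
	
		kubectl --context functional-758263 -n kube-system get pods \
		  -l component=kube-apiserver \
		  -o jsonpath='{.items[0].spec.containers[0].command}' \
		  | tr ',' '\n' | grep enable-admission-plugins
		# expected to include: --enable-admission-plugins=NamespaceAutoProvision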
	I1002 21:15:53.615092 1292252 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:15:53.619656 1292252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:15:53.762679 1292252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:15:53.776704 1292252 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263 for IP: 192.168.49.2
	I1002 21:15:53.776714 1292252 certs.go:195] generating shared ca certs ...
	I1002 21:15:53.776728 1292252 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:15:53.776877 1292252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 21:15:53.776924 1292252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 21:15:53.776930 1292252 certs.go:257] generating profile certs ...
	I1002 21:15:53.777015 1292252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.key
	I1002 21:15:53.777063 1292252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/apiserver.key.586fb326
	I1002 21:15:53.777094 1292252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/proxy-client.key
	I1002 21:15:53.777197 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 21:15:53.777231 1292252 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 21:15:53.777238 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:15:53.777261 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 21:15:53.777283 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:15:53.777301 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 21:15:53.777342 1292252 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 21:15:53.777971 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:15:53.798024 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:15:53.817168 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:15:53.835091 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:15:53.853372 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:15:53.870891 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:15:53.888602 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:15:53.906199 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:15:53.923372 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 21:15:53.940746 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:15:53.958235 1292252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 21:15:53.976275 1292252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:15:53.988926 1292252 ssh_runner.go:195] Run: openssl version
	I1002 21:15:53.995662 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 21:15:54.005489 1292252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 21:15:54.011326 1292252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 21:15:54.011390 1292252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 21:15:54.053420 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:15:54.062130 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:15:54.070849 1292252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:15:54.075142 1292252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:15:54.075205 1292252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:15:54.121801 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:15:54.130014 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 21:15:54.138718 1292252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 21:15:54.142928 1292252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 21:15:54.142996 1292252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 21:15:54.184113 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 21:15:54.192432 1292252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:15:54.196526 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:15:54.237629 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:15:54.278569 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:15:54.319719 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:15:54.360830 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:15:54.401955 1292252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
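	Each of the openssl runs above relies on -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; minikube keys off the exit status, not the printed message, to decide whether certs need regenerating. The same check in isolation, against one of the paths from the log:
	
		if sudo openssl x509 -noout \
		    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		    -checkend 86400; then
		  echo "cert valid for at least another 24h"
		else
		  echo "cert expires within 24h (or is already expired) - regenerate"
		fi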
	I1002 21:15:54.443122 1292252 kubeadm.go:400] StartCluster: {Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:15:54.443197 1292252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:15:54.443272 1292252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:15:54.474260 1292252 cri.go:89] found id: "9649afe1b6b38b025beb68c79c70b6998d67541c0c7b6196d99facb1ab57420a"
	I1002 21:15:54.474272 1292252 cri.go:89] found id: "1f6f48a5b1f7b3be987e37cae85b21ee4867995b17a6e5d5e07b06129e59d920"
	I1002 21:15:54.474275 1292252 cri.go:89] found id: "9e7f6089695e63d5410202135d50fe9726888f4898f664273a1013ffd6110393"
	I1002 21:15:54.474278 1292252 cri.go:89] found id: "27c369b17a31d3f181e16671b68067543ed36a61ccd50bfc1caf0af19514dc7d"
	I1002 21:15:54.474280 1292252 cri.go:89] found id: "efbeb7877342fabd1712b7e53d81d9c0b64f7ce72b093f1612b5eba567988134"
	I1002 21:15:54.474283 1292252 cri.go:89] found id: "607ebb15c572b7ce6cb71745b1dfd48219ff83d9427594c574c016048368e79d"
	I1002 21:15:54.474285 1292252 cri.go:89] found id: "dc5f5956dc79f730e1c4f1ac3a6c69f0cd05f7ae5180834110ca563aab08e8eb"
	I1002 21:15:54.474287 1292252 cri.go:89] found id: "86894ddc10cd2de727b64caf9460d09e77024b1535f8917ddc8d939641967662"
	I1002 21:15:54.474289 1292252 cri.go:89] found id: ""
	I1002 21:15:54.474338 1292252 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 21:15:54.485616 1292252 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:15:54Z" level=error msg="open /run/runc: no such file or directory"
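
The warning appears harmless here: CRI-O on this image keeps no runc state under /run/runc, so the paused-container probe fails and the restart path simply proceeds as if nothing were paused. A hedged sketch of the probe plus the CRI-level fallback this same log uses elsewhere:

    # Try runc first; fall back to crictl when the runc root is absent.
    sudo runc list -f json 2>/dev/null \
      || sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
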
	I1002 21:15:54.485697 1292252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:15:54.493752 1292252 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:15:54.493778 1292252 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:15:54.493842 1292252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:15:54.501338 1292252 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:15:54.501903 1292252 kubeconfig.go:125] found "functional-758263" server: "https://192.168.49.2:8441"
	I1002 21:15:54.503195 1292252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:15:54.511610 1292252 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 21:14:06.331103283 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 21:15:53.609048549 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
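
The drift traces back to the ExtraOptions entry in the StartCluster config above. A hedged sketch of the start flag that produces it, using minikube's component.key=value --extra-config convention:

    # Replaces the default admission-plugin list in kubeadm.yaml.new.
    out/minikube-linux-arm64 -p functional-758263 start \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
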
	I1002 21:15:54.511622 1292252 kubeadm.go:1160] stopping kube-system containers ...
	I1002 21:15:54.511633 1292252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 21:15:54.511690 1292252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:15:54.540669 1292252 cri.go:89] found id: "9649afe1b6b38b025beb68c79c70b6998d67541c0c7b6196d99facb1ab57420a"
	I1002 21:15:54.540680 1292252 cri.go:89] found id: "1f6f48a5b1f7b3be987e37cae85b21ee4867995b17a6e5d5e07b06129e59d920"
	I1002 21:15:54.540684 1292252 cri.go:89] found id: "9e7f6089695e63d5410202135d50fe9726888f4898f664273a1013ffd6110393"
	I1002 21:15:54.540687 1292252 cri.go:89] found id: "27c369b17a31d3f181e16671b68067543ed36a61ccd50bfc1caf0af19514dc7d"
	I1002 21:15:54.540689 1292252 cri.go:89] found id: "efbeb7877342fabd1712b7e53d81d9c0b64f7ce72b093f1612b5eba567988134"
	I1002 21:15:54.540693 1292252 cri.go:89] found id: "607ebb15c572b7ce6cb71745b1dfd48219ff83d9427594c574c016048368e79d"
	I1002 21:15:54.540695 1292252 cri.go:89] found id: "dc5f5956dc79f730e1c4f1ac3a6c69f0cd05f7ae5180834110ca563aab08e8eb"
	I1002 21:15:54.540697 1292252 cri.go:89] found id: "86894ddc10cd2de727b64caf9460d09e77024b1535f8917ddc8d939641967662"
	I1002 21:15:54.540699 1292252 cri.go:89] found id: ""
	I1002 21:15:54.540704 1292252 cri.go:252] Stopping containers: [9649afe1b6b38b025beb68c79c70b6998d67541c0c7b6196d99facb1ab57420a 1f6f48a5b1f7b3be987e37cae85b21ee4867995b17a6e5d5e07b06129e59d920 9e7f6089695e63d5410202135d50fe9726888f4898f664273a1013ffd6110393 27c369b17a31d3f181e16671b68067543ed36a61ccd50bfc1caf0af19514dc7d efbeb7877342fabd1712b7e53d81d9c0b64f7ce72b093f1612b5eba567988134 607ebb15c572b7ce6cb71745b1dfd48219ff83d9427594c574c016048368e79d dc5f5956dc79f730e1c4f1ac3a6c69f0cd05f7ae5180834110ca563aab08e8eb 86894ddc10cd2de727b64caf9460d09e77024b1535f8917ddc8d939641967662]
	I1002 21:15:54.540757 1292252 ssh_runner.go:195] Run: which crictl
	I1002 21:15:54.544366 1292252 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 9649afe1b6b38b025beb68c79c70b6998d67541c0c7b6196d99facb1ab57420a 1f6f48a5b1f7b3be987e37cae85b21ee4867995b17a6e5d5e07b06129e59d920 9e7f6089695e63d5410202135d50fe9726888f4898f664273a1013ffd6110393 27c369b17a31d3f181e16671b68067543ed36a61ccd50bfc1caf0af19514dc7d efbeb7877342fabd1712b7e53d81d9c0b64f7ce72b093f1612b5eba567988134 607ebb15c572b7ce6cb71745b1dfd48219ff83d9427594c574c016048368e79d dc5f5956dc79f730e1c4f1ac3a6c69f0cd05f7ae5180834110ca563aab08e8eb 86894ddc10cd2de727b64caf9460d09e77024b1535f8917ddc8d939641967662
	I1002 21:15:54.617517 1292252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 21:15:54.725228 1292252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:54.733514 1292252 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 21:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 21:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 21:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 21:14 /etc/kubernetes/scheduler.conf
	
	I1002 21:15:54.733572 1292252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:15:54.741689 1292252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:15:54.749944 1292252 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:15:54.750013 1292252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:54.757782 1292252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:15:54.765786 1292252 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:15:54.765844 1292252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:54.774010 1292252 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:15:54.782141 1292252 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:15:54.782196 1292252 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:54.790072 1292252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:15:54.798377 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:15:54.847429 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:15:58.352961 1292252 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.505507906s)
	I1002 21:15:58.353030 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:15:58.573905 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:15:58.635208 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
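
Rather than a full "kubeadm init", the restart path replays individual init phases, preserving etcd data and node identity. A condensed sketch of the sequence just executed, under the same PATH and config assumptions as the log:

    # Phase order as logged: certs, kubeconfigs, kubelet, static pods, local etcd.
    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
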
	I1002 21:15:58.697291 1292252 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:15:58.697358 1292252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:15:59.198384 1292252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:15:59.697468 1292252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:15:59.714498 1292252 api_server.go:72] duration metric: took 1.017237332s to wait for apiserver process to appear ...
	I1002 21:15:59.714522 1292252 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:15:59.714541 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:03.199517 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:16:03.199533 1292252 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:16:03.199545 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:03.281155 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 21:16:03.281172 1292252 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 21:16:03.281183 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:03.377587 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:16:03.377612 1292252 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:16:03.715128 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:03.725736 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:16:03.725754 1292252 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:16:04.215195 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:04.228390 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 21:16:04.228410 1292252 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 21:16:04.714899 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:04.723110 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:16:04.736856 1292252 api_server.go:141] control plane version: v1.34.1
	I1002 21:16:04.736873 1292252 api_server.go:131] duration metric: took 5.022344964s to wait for apiserver health ...
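
The 403 -> 500 -> 200 progression above is a normal apiserver cold start: anonymous requests are forbidden until the rbac/bootstrap-roles post-start hook installs the default roles, then the remaining hooks drain one by one. A manual probe of the same endpoint (-k because the serving cert is not in the local trust store):

    # ?verbose returns the same [+]/[-] per-check breakdown seen in the log.
    curl -k "https://192.168.49.2:8441/healthz?verbose"
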
	I1002 21:16:04.736881 1292252 cni.go:84] Creating CNI manager for ""
	I1002 21:16:04.736886 1292252 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:16:04.741235 1292252 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 21:16:04.744379 1292252 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:16:04.748835 1292252 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 21:16:04.748846 1292252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 21:16:04.762891 1292252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
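
kindnet is applied as a plain manifest through the bundled kubectl. A hedged follow-up check, with the daemonset name assumed from the kindnet-m8fgt pod that appears below:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet
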
	I1002 21:16:05.199125 1292252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:16:05.203541 1292252 system_pods.go:59] 8 kube-system pods found
	I1002 21:16:05.203567 1292252 system_pods.go:61] "coredns-66bc5c9577-pnljj" [f6568534-b383-48b3-848f-4c6d0cd686fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:16:05.203639 1292252 system_pods.go:61] "etcd-functional-758263" [31ceec32-b143-4369-b21b-1be38f3367e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:16:05.203644 1292252 system_pods.go:61] "kindnet-m8fgt" [2c302e0d-7882-4279-bb6a-e5aa514cf772] Running
	I1002 21:16:05.203655 1292252 system_pods.go:61] "kube-apiserver-functional-758263" [b2dc33e0-3c1b-4fc3-9435-333eb75aa7ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:16:05.203663 1292252 system_pods.go:61] "kube-controller-manager-functional-758263" [e8af259c-7912-4cce-a773-77336613e8a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:16:05.203668 1292252 system_pods.go:61] "kube-proxy-slrzd" [7566621e-723f-4373-8f27-349d3d32bb8a] Running
	I1002 21:16:05.203678 1292252 system_pods.go:61] "kube-scheduler-functional-758263" [69a6d78f-fb18-4c5c-8400-aaa0d2e7a72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:16:05.203681 1292252 system_pods.go:61] "storage-provisioner" [f7dadba3-b176-4cdf-bcbd-f3ed05a2b0d3] Running
	I1002 21:16:05.203686 1292252 system_pods.go:74] duration metric: took 4.551282ms to wait for pod list to return data ...
	I1002 21:16:05.203693 1292252 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:16:05.208311 1292252 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:16:05.208333 1292252 node_conditions.go:123] node cpu capacity is 2
	I1002 21:16:05.208344 1292252 node_conditions.go:105] duration metric: took 4.647386ms to run NodePressure ...
	I1002 21:16:05.208407 1292252 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 21:16:05.475757 1292252 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 21:16:05.479131 1292252 kubeadm.go:743] kubelet initialised
	I1002 21:16:05.479142 1292252 kubeadm.go:744] duration metric: took 3.372292ms waiting for restarted kubelet to initialise ...
	I1002 21:16:05.479156 1292252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:16:05.488716 1292252 ops.go:34] apiserver oom_adj: -16
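
oom_adj is the legacy kernel knob (range -17..15, with -17 disabling OOM kills entirely), so -16 makes the apiserver nearly the last process the OOM killer will choose. A sketch reading both the legacy and the modern value:

    pid=$(pgrep -xn kube-apiserver)
    cat /proc/$pid/oom_adj        # legacy scale; -16 in this run
    cat /proc/$pid/oom_score_adj  # modern scale (-1000..1000), kept in sync by the kernel
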
	I1002 21:16:05.488728 1292252 kubeadm.go:601] duration metric: took 10.994944459s to restartPrimaryControlPlane
	I1002 21:16:05.488737 1292252 kubeadm.go:402] duration metric: took 11.045623072s to StartCluster
	I1002 21:16:05.488751 1292252 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:16:05.488829 1292252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:16:05.489530 1292252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:16:05.489774 1292252 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:16:05.490219 1292252 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:16:05.490236 1292252 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:16:05.490325 1292252 addons.go:69] Setting storage-provisioner=true in profile "functional-758263"
	I1002 21:16:05.490328 1292252 addons.go:69] Setting default-storageclass=true in profile "functional-758263"
	I1002 21:16:05.490338 1292252 addons.go:238] Setting addon storage-provisioner=true in "functional-758263"
	W1002 21:16:05.490343 1292252 addons.go:247] addon storage-provisioner should already be in state true
	I1002 21:16:05.490352 1292252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-758263"
	I1002 21:16:05.490365 1292252 host.go:66] Checking if "functional-758263" exists ...
	I1002 21:16:05.490676 1292252 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
	I1002 21:16:05.490844 1292252 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
	I1002 21:16:05.493218 1292252 out.go:179] * Verifying Kubernetes components...
	I1002 21:16:05.496767 1292252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:16:05.523033 1292252 addons.go:238] Setting addon default-storageclass=true in "functional-758263"
	W1002 21:16:05.523044 1292252 addons.go:247] addon default-storageclass should already be in state true
	I1002 21:16:05.523067 1292252 host.go:66] Checking if "functional-758263" exists ...
	I1002 21:16:05.523471 1292252 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
	I1002 21:16:05.537241 1292252 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:16:05.541589 1292252 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:16:05.541601 1292252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:16:05.541667 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:16:05.576067 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:16:05.576623 1292252 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:16:05.576649 1292252 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:16:05.576705 1292252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:16:05.603605 1292252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:16:05.722531 1292252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:16:05.731302 1292252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:16:05.743383 1292252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:16:05.922120 1292252 node_ready.go:35] waiting up to 6m0s for node "functional-758263" to be "Ready" ...
	I1002 21:16:05.926050 1292252 node_ready.go:49] node "functional-758263" is "Ready"
	I1002 21:16:05.926067 1292252 node_ready.go:38] duration metric: took 3.903843ms for node "functional-758263" to be "Ready" ...
	I1002 21:16:05.926079 1292252 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:16:05.926156 1292252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:16:06.519032 1292252 api_server.go:72] duration metric: took 1.029232272s to wait for apiserver process to appear ...
	I1002 21:16:06.519043 1292252 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:16:06.519059 1292252 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 21:16:06.522209 1292252 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1002 21:16:06.525120 1292252 addons.go:514] duration metric: took 1.034865913s for enable addons: enabled=[default-storageclass storage-provisioner]
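
Only the two always-on addons are re-enabled on restart, matching the toEnable map earlier in this log. A quick confirmation from the host, assuming the same profile:

    out/minikube-linux-arm64 -p functional-758263 addons list
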
	I1002 21:16:06.528510 1292252 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 21:16:06.529580 1292252 api_server.go:141] control plane version: v1.34.1
	I1002 21:16:06.529594 1292252 api_server.go:131] duration metric: took 10.546399ms to wait for apiserver health ...
	I1002 21:16:06.529602 1292252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:16:06.532835 1292252 system_pods.go:59] 8 kube-system pods found
	I1002 21:16:06.532854 1292252 system_pods.go:61] "coredns-66bc5c9577-pnljj" [f6568534-b383-48b3-848f-4c6d0cd686fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:16:06.532862 1292252 system_pods.go:61] "etcd-functional-758263" [31ceec32-b143-4369-b21b-1be38f3367e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:16:06.532867 1292252 system_pods.go:61] "kindnet-m8fgt" [2c302e0d-7882-4279-bb6a-e5aa514cf772] Running
	I1002 21:16:06.532875 1292252 system_pods.go:61] "kube-apiserver-functional-758263" [b2dc33e0-3c1b-4fc3-9435-333eb75aa7ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:16:06.532881 1292252 system_pods.go:61] "kube-controller-manager-functional-758263" [e8af259c-7912-4cce-a773-77336613e8a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:16:06.532885 1292252 system_pods.go:61] "kube-proxy-slrzd" [7566621e-723f-4373-8f27-349d3d32bb8a] Running
	I1002 21:16:06.532896 1292252 system_pods.go:61] "kube-scheduler-functional-758263" [69a6d78f-fb18-4c5c-8400-aaa0d2e7a72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:16:06.532901 1292252 system_pods.go:61] "storage-provisioner" [f7dadba3-b176-4cdf-bcbd-f3ed05a2b0d3] Running
	I1002 21:16:06.532906 1292252 system_pods.go:74] duration metric: took 3.298842ms to wait for pod list to return data ...
	I1002 21:16:06.532913 1292252 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:16:06.535492 1292252 default_sa.go:45] found service account: "default"
	I1002 21:16:06.535506 1292252 default_sa.go:55] duration metric: took 2.587347ms for default service account to be created ...
	I1002 21:16:06.535514 1292252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:16:06.538809 1292252 system_pods.go:86] 8 kube-system pods found
	I1002 21:16:06.538828 1292252 system_pods.go:89] "coredns-66bc5c9577-pnljj" [f6568534-b383-48b3-848f-4c6d0cd686fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 21:16:06.538836 1292252 system_pods.go:89] "etcd-functional-758263" [31ceec32-b143-4369-b21b-1be38f3367e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 21:16:06.538841 1292252 system_pods.go:89] "kindnet-m8fgt" [2c302e0d-7882-4279-bb6a-e5aa514cf772] Running
	I1002 21:16:06.538846 1292252 system_pods.go:89] "kube-apiserver-functional-758263" [b2dc33e0-3c1b-4fc3-9435-333eb75aa7ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 21:16:06.538851 1292252 system_pods.go:89] "kube-controller-manager-functional-758263" [e8af259c-7912-4cce-a773-77336613e8a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 21:16:06.538855 1292252 system_pods.go:89] "kube-proxy-slrzd" [7566621e-723f-4373-8f27-349d3d32bb8a] Running
	I1002 21:16:06.538860 1292252 system_pods.go:89] "kube-scheduler-functional-758263" [69a6d78f-fb18-4c5c-8400-aaa0d2e7a72b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 21:16:06.538863 1292252 system_pods.go:89] "storage-provisioner" [f7dadba3-b176-4cdf-bcbd-f3ed05a2b0d3] Running
	I1002 21:16:06.538869 1292252 system_pods.go:126] duration metric: took 3.350443ms to wait for k8s-apps to be running ...
	I1002 21:16:06.538875 1292252 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:16:06.538934 1292252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:16:06.552624 1292252 system_svc.go:56] duration metric: took 13.737485ms WaitForService to wait for kubelet
	I1002 21:16:06.552649 1292252 kubeadm.go:586] duration metric: took 1.062849139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:16:06.552666 1292252 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:16:06.566334 1292252 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:16:06.566349 1292252 node_conditions.go:123] node cpu capacity is 2
	I1002 21:16:06.566358 1292252 node_conditions.go:105] duration metric: took 13.686803ms to run NodePressure ...
	I1002 21:16:06.566369 1292252 start.go:241] waiting for startup goroutines ...
	I1002 21:16:06.566376 1292252 start.go:246] waiting for cluster config update ...
	I1002 21:16:06.566385 1292252 start.go:255] writing updated cluster config ...
	I1002 21:16:06.566691 1292252 ssh_runner.go:195] Run: rm -f paused
	I1002 21:16:06.570121 1292252 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:16:06.599454 1292252 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pnljj" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:07.606400 1292252 pod_ready.go:94] pod "coredns-66bc5c9577-pnljj" is "Ready"
	I1002 21:16:07.606416 1292252 pod_ready.go:86] duration metric: took 1.006947784s for pod "coredns-66bc5c9577-pnljj" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:07.609261 1292252 pod_ready.go:83] waiting for pod "etcd-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:16:09.614622 1292252 pod_ready.go:104] pod "etcd-functional-758263" is not "Ready", error: <nil>
	W1002 21:16:12.115208 1292252 pod_ready.go:104] pod "etcd-functional-758263" is not "Ready", error: <nil>
	I1002 21:16:14.115052 1292252 pod_ready.go:94] pod "etcd-functional-758263" is "Ready"
	I1002 21:16:14.115066 1292252 pod_ready.go:86] duration metric: took 6.505792228s for pod "etcd-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:14.117352 1292252 pod_ready.go:83] waiting for pod "kube-apiserver-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 21:16:16.124086 1292252 pod_ready.go:104] pod "kube-apiserver-functional-758263" is not "Ready", error: <nil>
	I1002 21:16:16.622698 1292252 pod_ready.go:94] pod "kube-apiserver-functional-758263" is "Ready"
	I1002 21:16:16.622712 1292252 pod_ready.go:86] duration metric: took 2.505347882s for pod "kube-apiserver-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.625200 1292252 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.629565 1292252 pod_ready.go:94] pod "kube-controller-manager-functional-758263" is "Ready"
	I1002 21:16:16.629580 1292252 pod_ready.go:86] duration metric: took 4.365718ms for pod "kube-controller-manager-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.631712 1292252 pod_ready.go:83] waiting for pod "kube-proxy-slrzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.636103 1292252 pod_ready.go:94] pod "kube-proxy-slrzd" is "Ready"
	I1002 21:16:16.636117 1292252 pod_ready.go:86] duration metric: took 4.394033ms for pod "kube-proxy-slrzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.638320 1292252 pod_ready.go:83] waiting for pod "kube-scheduler-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.913361 1292252 pod_ready.go:94] pod "kube-scheduler-functional-758263" is "Ready"
	I1002 21:16:16.913375 1292252 pod_ready.go:86] duration metric: took 275.044415ms for pod "kube-scheduler-functional-758263" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 21:16:16.913386 1292252 pod_ready.go:40] duration metric: took 10.343244859s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 21:16:16.971970 1292252 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 21:16:16.975159 1292252 out.go:179] * Done! kubectl is now configured to use "functional-758263" cluster and "default" namespace by default
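
The closing skew check passes because kubectl supports clusters within one minor version of the client (1.33 against 1.34 here). To reproduce the comparison:

    # Client and server gitVersion; a minor-version delta of 1 is supported.
    kubectl version --output=yaml | grep gitVersion
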
	
	
	==> CRI-O <==
	Oct 02 21:16:53 functional-758263 crio[3522]: time="2025-10-02T21:16:53.775474552Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-qrqbc Namespace:default ID:900c1b61ae898c569ed1374254f4cddc7738a72378877c414d56f4ff02c0a8c1 UID:6a1b4cf7-e9df-4103-970c-123dac96332f NetNS:/var/run/netns/9fa1d19d-61ca-429a-8a0a-fe07cacc1e40 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400007aac0}] Aliases:map[]}"
	Oct 02 21:16:53 functional-758263 crio[3522]: time="2025-10-02T21:16:53.77562307Z" level=info msg="Checking pod default_hello-node-75c85bcc94-qrqbc for CNI network kindnet (type=ptp)"
	Oct 02 21:16:53 functional-758263 crio[3522]: time="2025-10-02T21:16:53.779468338Z" level=info msg="Ran pod sandbox 900c1b61ae898c569ed1374254f4cddc7738a72378877c414d56f4ff02c0a8c1 with infra container: default/hello-node-75c85bcc94-qrqbc/POD" id=85bc4723-5b0d-422a-9a2e-919d540ee62b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:16:53 functional-758263 crio[3522]: time="2025-10-02T21:16:53.78075738Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6836a48a-56fa-44b7-9b09-9aabfad956e9 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.674590595Z" level=info msg="Stopping pod sandbox: 29b5770e186a50c3b0d4a59f834b30482f4d56e2cf22bdc7849a7520df48181a" id=f3f1bdb4-ddce-45b0-bdf9-d663fff0dfd8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.674654979Z" level=info msg="Stopped pod sandbox (already stopped): 29b5770e186a50c3b0d4a59f834b30482f4d56e2cf22bdc7849a7520df48181a" id=f3f1bdb4-ddce-45b0-bdf9-d663fff0dfd8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.675056169Z" level=info msg="Removing pod sandbox: 29b5770e186a50c3b0d4a59f834b30482f4d56e2cf22bdc7849a7520df48181a" id=aa7d8553-4a18-43d4-9f64-02d3432e61d9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.678670238Z" level=info msg="Removed pod sandbox: 29b5770e186a50c3b0d4a59f834b30482f4d56e2cf22bdc7849a7520df48181a" id=aa7d8553-4a18-43d4-9f64-02d3432e61d9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.679228077Z" level=info msg="Stopping pod sandbox: 9dcc2635f569bea423bf14afa32a6032388d31dafcef72be7600c84013e3942d" id=0141a8ce-fe2a-429e-afbf-cac240285f85 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.679278907Z" level=info msg="Stopped pod sandbox (already stopped): 9dcc2635f569bea423bf14afa32a6032388d31dafcef72be7600c84013e3942d" id=0141a8ce-fe2a-429e-afbf-cac240285f85 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.679638129Z" level=info msg="Removing pod sandbox: 9dcc2635f569bea423bf14afa32a6032388d31dafcef72be7600c84013e3942d" id=003d6a7f-9a2f-41ee-b319-c6ffe147929d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.68312342Z" level=info msg="Removed pod sandbox: 9dcc2635f569bea423bf14afa32a6032388d31dafcef72be7600c84013e3942d" id=003d6a7f-9a2f-41ee-b319-c6ffe147929d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.683633614Z" level=info msg="Stopping pod sandbox: 7da98a3c5c6792a2b65ff73f22e3fb8b729a0bcbe1fe2e8a7e5a9d6746012f43" id=3f9bdd3c-4178-4b66-9220-90eed663e49a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.683680727Z" level=info msg="Stopped pod sandbox (already stopped): 7da98a3c5c6792a2b65ff73f22e3fb8b729a0bcbe1fe2e8a7e5a9d6746012f43" id=3f9bdd3c-4178-4b66-9220-90eed663e49a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.684013758Z" level=info msg="Removing pod sandbox: 7da98a3c5c6792a2b65ff73f22e3fb8b729a0bcbe1fe2e8a7e5a9d6746012f43" id=2a36e58a-4356-468e-af17-d201432763eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:16:58 functional-758263 crio[3522]: time="2025-10-02T21:16:58.6875576Z" level=info msg="Removed pod sandbox: 7da98a3c5c6792a2b65ff73f22e3fb8b729a0bcbe1fe2e8a7e5a9d6746012f43" id=2a36e58a-4356-468e-af17-d201432763eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 02 21:17:04 functional-758263 crio[3522]: time="2025-10-02T21:17:04.743459212Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=31511a81-28ae-4415-81f6-4e512719d0ea name=/runtime.v1.ImageService/PullImage
	Oct 02 21:17:14 functional-758263 crio[3522]: time="2025-10-02T21:17:14.741255626Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=be0c219b-3e97-4647-b81d-b0b9a3030e92 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:17:31 functional-758263 crio[3522]: time="2025-10-02T21:17:31.740806867Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e906f66-b06b-495b-b913-d6ac6caf41cc name=/runtime.v1.ImageService/PullImage
	Oct 02 21:17:55 functional-758263 crio[3522]: time="2025-10-02T21:17:55.740226345Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6b502b98-15d2-4771-9118-fb43edf2a795 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:18:26 functional-758263 crio[3522]: time="2025-10-02T21:18:26.747235962Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ef196d2e-2b69-42a9-a76b-c08e2446eb71 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:19:23 functional-758263 crio[3522]: time="2025-10-02T21:19:23.741282921Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b964b937-7ae1-4819-a44f-f7bbad0b55e5 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:19:58 functional-758263 crio[3522]: time="2025-10-02T21:19:58.7416482Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=98d2d9dd-a024-464f-8ebf-ad46d93c0db7 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:22:05 functional-758263 crio[3522]: time="2025-10-02T21:22:05.740261606Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b79a665d-4861-4eb5-92fc-c54b74310488 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:22:51 functional-758263 crio[3522]: time="2025-10-02T21:22:51.740773005Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b54fff72-b034-4a07-be22-3b5386b9ed7d name=/runtime.v1.ImageService/PullImage
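
The same "Pulling image: kicbase/echo-server:latest" line recurring at roughly exponentially growing intervals is kubelet's image-pull backoff; the pull never completes in this window, consistent with the hello-node ServiceCmd failures in this report. A hedged way to inspect it on the node:

    sudo crictl images | grep echo-server   # listed only after a successful pull
    sudo crictl pods --name hello-node      # the sandbox exists, but no container ever starts
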
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0afea8d4dd1a0       docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992   9 minutes ago       Running             myfrontend                0                   f1aa90dde18cc       sp-pod                                      default
	43deea3e7c7a1       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac   10 minutes ago      Running             nginx                     0                   0f1c3e48541ec       nginx-svc                                   default
	f90a6c2bed5ba       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   4136efc3bc3c1       kube-proxy-slrzd                            kube-system
	b87823e2bf579       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   8263d78b303b9       coredns-66bc5c9577-pnljj                    kube-system
	1721f7d9a1d63       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   a76708bac7d01       storage-provisioner                         kube-system
	47d5f3b806ce7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   e107a622b8075       kindnet-m8fgt                               kube-system
	1320e2572ff1d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   b54c918950e5f       kube-apiserver-functional-758263            kube-system
	5a412e58588d7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   23b828ee26204       kube-controller-manager-functional-758263   kube-system
	22a67d1abd00f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   b527bcba293ce       kube-scheduler-functional-758263            kube-system
	7178c5dd3e3a3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   a598d29f6676d       etcd-functional-758263                      kube-system
	9649afe1b6b38       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   8263d78b303b9       coredns-66bc5c9577-pnljj                    kube-system
	1f6f48a5b1f7b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   a76708bac7d01       storage-provisioner                         kube-system
	9e7f6089695e6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   4136efc3bc3c1       kube-proxy-slrzd                            kube-system
	27c369b17a31d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   e107a622b8075       kindnet-m8fgt                               kube-system
	efbeb7877342f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   b527bcba293ce       kube-scheduler-functional-758263            kube-system
	607ebb15c572b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   a598d29f6676d       etcd-functional-758263                      kube-system
	dc5f5956dc79f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   23b828ee26204       kube-controller-manager-functional-758263   kube-system
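
Note that kube-apiserver alone shows ATTEMPT 0 in a fresh pod sandbox, consistent with the admission-plugin change rewriting only the apiserver static-pod manifest; the other components restarted in place (ATTEMPT 2) inside their original sandboxes. To inspect a single entry in detail:

    sudo crictl ps -a --name kube-apiserver -o json
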
	
	
	==> coredns [9649afe1b6b38b025beb68c79c70b6998d67541c0c7b6196d99facb1ab57420a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41292 - 62827 "HINFO IN 9052580680667968297.5267857705183436172. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017930569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b87823e2bf579a0cb6676584b322ba01d8b35297bde4bda2d2c3e0387f9e1d7c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44682 - 56646 "HINFO IN 8712147276886260648.1978269169279335295. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012207168s
	
	
	==> describe nodes <==
	Name:               functional-758263
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-758263
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=functional-758263
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T21_14_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 21:14:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-758263
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 21:26:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 21:24:54 +0000   Thu, 02 Oct 2025 21:14:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 21:24:54 +0000   Thu, 02 Oct 2025 21:14:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 21:24:54 +0000   Thu, 02 Oct 2025 21:14:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 21:24:54 +0000   Thu, 02 Oct 2025 21:15:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-758263
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3bc6ada30ac4cc2920f849d7f5c645e
	  System UUID:                de38b351-ea69-4b31-b1b5-abda59afe78d
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-qrqbc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  default                     hello-node-connect-7d85dfc575-9hcjc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-pnljj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-758263                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-m8fgt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-758263             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-758263    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-slrzd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-758263             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-758263 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-758263 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-758263 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-758263 event: Registered Node functional-758263 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-758263 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-758263 event: Registered Node functional-758263 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-758263 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-758263 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-758263 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-758263 event: Registered Node functional-758263 in Controller
	
	
	==> dmesg <==
	[Oct 2 20:02] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:05] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 21:06] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [607ebb15c572b7ce6cb71745b1dfd48219ff83d9427594c574c016048368e79d] <==
	{"level":"warn","ts":"2025-10-02T21:15:25.042289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.067498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.104746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.142191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.185326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.214123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:15:25.357904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:15:46.387973Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T21:15:46.388026Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-758263","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T21:15:46.388126Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:15:46.654846Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T21:15:46.656339Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:15:46.656413Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T21:15:46.656476Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T21:15:46.656498Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T21:15:46.656600Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:15:46.656677Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:15:46.656725Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T21:15:46.656810Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T21:15:46.656833Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T21:15:46.656844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:15:46.660289Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T21:15:46.660380Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T21:15:46.660471Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T21:15:46.660503Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-758263","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7178c5dd3e3a359a98386a1cd9f57b3b7e8f9d2ebe17b0e7e5a3566c18479af6] <==
	{"level":"warn","ts":"2025-10-02T21:16:01.570603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.611086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.648224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.701817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.713518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.733960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.762153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.787809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.820060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.867192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.902339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.919080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.960893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:01.980877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.014206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.024836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.059158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.110330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.127562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.150022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.177388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T21:16:02.354828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T21:26:00.380911Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2025-10-02T21:26:00.409275Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1092,"took":"27.870651ms","hash":79245168,"current-db-size-bytes":3289088,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1384448,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-02T21:26:00.409340Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":79245168,"revision":1092,"compact-revision":-1}
	
	
	==> kernel <==
	 21:26:37 up  6:08,  0 user,  load average: 0.42, 0.55, 1.61
	Linux functional-758263 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [27c369b17a31d3f181e16671b68067543ed36a61ccd50bfc1caf0af19514dc7d] <==
	I1002 21:15:21.607393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 21:15:21.609455       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 21:15:21.609590       1 main.go:148] setting mtu 1500 for CNI 
	I1002 21:15:21.609602       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 21:15:21.609613       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T21:15:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 21:15:21.848636       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 21:15:21.848676       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 21:15:21.848686       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 21:15:21.849428       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 21:15:26.849840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 21:15:26.849931       1 metrics.go:72] Registering metrics
	I1002 21:15:26.849998       1 controller.go:711] "Syncing nftables rules"
	I1002 21:15:31.848373       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:15:31.848437       1 main.go:301] handling current node
	I1002 21:15:41.848401       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:15:41.848435       1 main.go:301] handling current node
	
	
	==> kindnet [47d5f3b806ce71de6c8e7166c4f678581fb90ef8b38f3d62d9f0793775f86b87] <==
	I1002 21:24:34.412218       1 main.go:301] handling current node
	I1002 21:24:44.403679       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:24:44.403735       1 main.go:301] handling current node
	I1002 21:24:54.403961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:24:54.403997       1 main.go:301] handling current node
	I1002 21:25:04.403873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:04.403980       1 main.go:301] handling current node
	I1002 21:25:14.404561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:14.404593       1 main.go:301] handling current node
	I1002 21:25:24.404475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:24.404510       1 main.go:301] handling current node
	I1002 21:25:34.406885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:34.406919       1 main.go:301] handling current node
	I1002 21:25:44.407376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:44.407410       1 main.go:301] handling current node
	I1002 21:25:54.405397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:25:54.405435       1 main.go:301] handling current node
	I1002 21:26:04.410125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:26:04.410236       1 main.go:301] handling current node
	I1002 21:26:14.404534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:26:14.404587       1 main.go:301] handling current node
	I1002 21:26:24.404048       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:26:24.404171       1 main.go:301] handling current node
	I1002 21:26:34.410427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:26:34.410461       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1320e2572ff1d2d4a390a795596b0232e570571828339ae9d5521e4fb1eee638] <==
	I1002 21:16:03.378633       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:16:03.383781       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 21:16:03.385028       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 21:16:03.385387       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 21:16:03.385499       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 21:16:03.385639       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 21:16:03.385704       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 21:16:03.397520       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1002 21:16:03.424012       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 21:16:03.698651       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 21:16:04.081058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:16:05.191392       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 21:16:05.346678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 21:16:05.437923       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:16:05.450724       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:16:06.726224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 21:16:06.757334       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 21:16:07.048332       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:16:20.287730       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.63.246"}
	I1002 21:16:26.295637       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.168.13"}
	I1002 21:16:35.073250       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.172.131"}
	E1002 21:16:45.150874       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35710: use of closed network connection
	E1002 21:16:53.318453       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56986: use of closed network connection
	I1002 21:16:53.527615       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.84.62"}
	I1002 21:26:03.324350       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5a412e58588d7fbb773eb22c00702d931be7262445a40492c4993b4c0c7345e0] <==
	I1002 21:16:06.683040       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:16:06.683145       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:16:06.683180       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 21:16:06.690191       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 21:16:06.691505       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 21:16:06.692129       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:16:06.692176       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 21:16:06.692295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:16:06.692424       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:16:06.692501       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:16:06.692575       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:16:06.692678       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:16:06.693982       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:16:06.698102       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 21:16:06.702109       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 21:16:06.714631       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 21:16:06.714764       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 21:16:06.714813       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 21:16:06.714843       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 21:16:06.714870       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 21:16:06.715180       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 21:16:06.715263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:16:06.715331       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:16:06.715359       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 21:16:06.722153       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [dc5f5956dc79f730e1c4f1ac3a6c69f0cd05f7ae5180834110ca563aab08e8eb] <==
	I1002 21:15:29.614081       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 21:15:29.616538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:15:29.626954       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 21:15:29.626967       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 21:15:29.628100       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 21:15:29.631332       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 21:15:29.633572       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 21:15:29.634062       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 21:15:29.635867       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 21:15:29.635924       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:15:29.637060       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 21:15:29.639288       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 21:15:29.640461       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 21:15:29.644804       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 21:15:29.648042       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 21:15:29.653386       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 21:15:29.657647       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 21:15:29.660044       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 21:15:29.660090       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 21:15:29.660137       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 21:15:29.666427       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 21:15:29.667675       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 21:15:29.674251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 21:15:29.674278       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 21:15:29.674286       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9e7f6089695e63d5410202135d50fe9726888f4898f664273a1013ffd6110393] <==
	I1002 21:15:22.718423       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:15:25.279195       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:15:27.046110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:15:27.046179       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:15:27.073117       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:15:27.314741       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:15:27.314858       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:15:27.320303       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:15:27.320639       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:15:27.320692       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:15:27.322699       1 config.go:200] "Starting service config controller"
	I1002 21:15:27.325819       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:15:27.325893       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:15:27.325936       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:15:27.325963       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:15:27.326661       1 config.go:309] "Starting node config controller"
	I1002 21:15:27.326723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:15:27.326754       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:15:27.333707       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:15:27.426297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:15:27.455968       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 21:15:27.463003       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f90a6c2bed5ba06a9d73ce4452bacf2eda24f680cbe06c653e085b3fd4f894d0] <==
	I1002 21:16:04.199466       1 server_linux.go:53] "Using iptables proxy"
	I1002 21:16:04.321190       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 21:16:04.422628       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 21:16:04.422661       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 21:16:04.422754       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 21:16:04.446356       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:16:04.446473       1 server_linux.go:132] "Using iptables Proxier"
	I1002 21:16:04.450446       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 21:16:04.450891       1 server.go:527] "Version info" version="v1.34.1"
	I1002 21:16:04.450950       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:16:04.452177       1 config.go:106] "Starting endpoint slice config controller"
	I1002 21:16:04.452238       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 21:16:04.452532       1 config.go:200] "Starting service config controller"
	I1002 21:16:04.452580       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 21:16:04.452929       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 21:16:04.452970       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 21:16:04.453652       1 config.go:309] "Starting node config controller"
	I1002 21:16:04.453747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 21:16:04.453802       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 21:16:04.553188       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 21:16:04.553195       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 21:16:04.553297       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [22a67d1abd00ff4bb81112e16785520bef31ff5326c93946f4d81a8dfa7618f6] <==
	I1002 21:16:02.261243       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:16:03.206678       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:16:03.206792       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:16:03.206827       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:16:03.206858       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:16:03.337696       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:16:03.337797       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:16:03.343909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:16:03.346245       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:16:03.346334       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:16:03.346378       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:16:03.446388       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [efbeb7877342fabd1712b7e53d81d9c0b64f7ce72b093f1612b5eba567988134] <==
	I1002 21:15:22.327609       1 serving.go:386] Generated self-signed cert in-memory
	W1002 21:15:26.618183       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1002 21:15:26.618290       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 21:15:26.618325       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1002 21:15:26.618354       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1002 21:15:26.801415       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 21:15:26.801512       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:15:26.813524       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 21:15:26.818163       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:15:26.818265       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:15:26.818312       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 21:15:26.926266       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:15:46.393509       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 21:15:46.393550       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 21:15:46.393572       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 21:15:46.393609       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 21:15:46.393860       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 21:15:46.393889       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 21:23:59 functional-758263 kubelet[3843]: E1002 21:23:59.740279    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:24:02 functional-758263 kubelet[3843]: E1002 21:24:02.740876    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:24:14 functional-758263 kubelet[3843]: E1002 21:24:14.740569    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:24:17 functional-758263 kubelet[3843]: E1002 21:24:17.740584    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:24:27 functional-758263 kubelet[3843]: E1002 21:24:27.740558    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:24:30 functional-758263 kubelet[3843]: E1002 21:24:30.740777    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:24:39 functional-758263 kubelet[3843]: E1002 21:24:39.740071    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:24:41 functional-758263 kubelet[3843]: E1002 21:24:41.740769    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:24:50 functional-758263 kubelet[3843]: E1002 21:24:50.740434    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:24:54 functional-758263 kubelet[3843]: E1002 21:24:54.740663    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:25:05 functional-758263 kubelet[3843]: E1002 21:25:05.740182    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:25:08 functional-758263 kubelet[3843]: E1002 21:25:08.740375    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:25:17 functional-758263 kubelet[3843]: E1002 21:25:17.739849    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:25:22 functional-758263 kubelet[3843]: E1002 21:25:22.740074    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:25:32 functional-758263 kubelet[3843]: E1002 21:25:32.741216    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:25:33 functional-758263 kubelet[3843]: E1002 21:25:33.739990    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:25:46 functional-758263 kubelet[3843]: E1002 21:25:46.741279    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:25:47 functional-758263 kubelet[3843]: E1002 21:25:47.740285    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:25:59 functional-758263 kubelet[3843]: E1002 21:25:59.740559    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:26:00 functional-758263 kubelet[3843]: E1002 21:26:00.740451    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:26:10 functional-758263 kubelet[3843]: E1002 21:26:10.740474    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:26:13 functional-758263 kubelet[3843]: E1002 21:26:13.740522    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:26:21 functional-758263 kubelet[3843]: E1002 21:26:21.740263    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	Oct 02 21:26:28 functional-758263 kubelet[3843]: E1002 21:26:28.740326    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-qrqbc" podUID="6a1b4cf7-e9df-4103-970c-123dac96332f"
	Oct 02 21:26:32 functional-758263 kubelet[3843]: E1002 21:26:32.741487    3843 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hcjc" podUID="81a5a591-6c1f-4566-a01d-d26bb8987cb3"
	
	
	==> storage-provisioner [1721f7d9a1d634331628f32dcf08c0286564b08c6d3a0670b1327803b3bd0d5d] <==
	W1002 21:26:12.305417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:14.308481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:14.314953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:16.317601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:16.323943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:18.327334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:18.331497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:20.334474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:20.339125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:22.342408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:22.348987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:24.352228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:24.356660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:26.359846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:26.366584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:28.369530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:28.374123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:30.377175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:30.381847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:32.384303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:32.390889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:34.393912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:34.398511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:36.402114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:26:36.407240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [1f6f48a5b1f7b3be987e37cae85b21ee4867995b17a6e5d5e07b06129e59d920] <==
	I1002 21:15:22.451217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:15:26.877018       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:15:26.936289       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 21:15:26.960355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:30.416117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:34.676213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:38.275157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:41.328854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:44.351310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:44.359137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:15:44.359392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:15:44.361867       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-758263_a04bc00d-4bf0-472b-a476-fc8ab7b10338!
	I1002 21:15:44.362120       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71a43639-2a8a-431e-bea9-5950559fe4bd", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-758263_a04bc00d-4bf0-472b-a476-fc8ab7b10338 became leader
	W1002 21:15:44.377350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 21:15:44.382837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 21:15:44.462329       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-758263_a04bc00d-4bf0-472b-a476-fc8ab7b10338!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-758263 -n functional-758263
helpers_test.go:269: (dbg) Run:  kubectl --context functional-758263 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-qrqbc hello-node-connect-7d85dfc575-9hcjc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-758263 describe pod hello-node-75c85bcc94-qrqbc hello-node-connect-7d85dfc575-9hcjc
helpers_test.go:290: (dbg) kubectl --context functional-758263 describe pod hello-node-75c85bcc94-qrqbc hello-node-connect-7d85dfc575-9hcjc:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-qrqbc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-758263/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:16:53 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wvxmw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wvxmw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qrqbc to functional-758263
	  Normal   Pulling    6m40s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m40s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m40s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m44s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m27s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-9hcjc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-758263/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 21:16:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dp26g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dp26g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9hcjc to functional-758263
	  Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.53s)
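
Both pods above fail on the same kubelet event: CRI-O's short-name resolution is in enforcing mode, so the unqualified reference "kicbase/echo-server" is rejected as ambiguous during name resolution, before any registry is even contacted. A minimal hedged workaround sketch, assuming the image is hosted on Docker Hub (the registry is an assumption; the log does not name one):

	# Fully qualify the reference so no short-name resolution happens
	# (docker.io is assumed, not confirmed by the log):
	kubectl --context functional-758263 set image deployment/hello-node \
	    echo-server=docker.io/kicbase/echo-server:latest
	kubectl --context functional-758263 set image deployment/hello-node-connect \
	    echo-server=docker.io/kicbase/echo-server:latest

Relaxing short-name-mode in the node's registries.conf would also let the bare name resolve, at the cost of reintroducing the ambiguity CRI-O is flagging.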

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-758263 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-758263 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qrqbc" [6a1b4cf7-e9df-4103-970c-123dac96332f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 21:16:58.450888 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:19:14.582762 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:19:42.293294 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:24:14.582499 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-758263 -n functional-758263
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 21:26:53.964485243 +0000 UTC m=+1279.212944505
functional_test.go:1460: (dbg) Run:  kubectl --context functional-758263 describe po hello-node-75c85bcc94-qrqbc -n default
functional_test.go:1460: (dbg) kubectl --context functional-758263 describe po hello-node-75c85bcc94-qrqbc -n default:
Name:             hello-node-75c85bcc94-qrqbc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-758263/192.168.49.2
Start Time:       Thu, 02 Oct 2025 21:16:53 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wvxmw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wvxmw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-qrqbc to functional-758263
Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-758263 logs hello-node-75c85bcc94-qrqbc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-758263 logs hello-node-75c85bcc94-qrqbc -n default: exit status 1 (116.882476ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-qrqbc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-758263 logs hello-node-75c85bcc94-qrqbc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)
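
The 10m0s wait could not have succeeded: the deployment is created at functional_test.go:1451 with the bare name kicbase/echo-server, which is exactly the reference that enforcing short-name mode rejects (see the ServiceCmdConnect analysis above). A hedged equivalent that sidesteps short-name resolution, again assuming Docker Hub hosts the image:

	kubectl --context functional-758263 create deployment hello-node \
	    --image docker.io/kicbase/echo-server:latest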

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 service --namespace=default --https --url hello-node: exit status 115 (494.483354ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31606
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-758263 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)
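
SVC_UNREACHABLE is a cascade rather than an independent regression: minikube resolves the NodePort URL (https://192.168.49.2:31606) but reports the service as unavailable because no pod behind it is running, the hello-node pod being stuck in ImagePullBackOff. The Format and URL subtests below fail the same way. One hedged way to confirm the service has no ready endpoints, assuming the cluster is still up:

	kubectl --context functional-758263 get endpointslices \
	    -l kubernetes.io/service-name=hello-node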

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 service hello-node --url --format={{.IP}}: exit status 115 (462.909824ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-758263 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 service hello-node --url: exit status 115 (489.63218ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31606
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-758263 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31606
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image load --daemon kicbase/echo-server:functional-758263 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-758263" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)
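
`image load --daemon` copies an image from the host Docker daemon into the cluster's container runtime (CRI-O on this job); the assertion at functional_test.go:461 then expects `image ls` to list it. A hedged check of both ends of the transfer, assuming the profile is still running:

	docker image inspect kicbase/echo-server:functional-758263 --format '{{.Id}}'
	out/minikube-linux-arm64 -p functional-758263 image ls

The same missing-image symptom repeats in ImageReloadDaemon and ImageTagAndLoadDaemon below.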

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image load --daemon kicbase/echo-server:functional-758263 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-758263" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-758263
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image load --daemon kicbase/echo-server:functional-758263 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-758263" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image save kicbase/echo-server:functional-758263 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
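
Because no tarball was written here, ImageLoadFromFile below necessarily fails with "no such file or directory" on the same path; it is a cascade of this failure. A hedged way to check whether the image was present in the cluster runtime for `image save` to export at all, assuming the node is still reachable:

	out/minikube-linux-arm64 -p functional-758263 ssh -- sudo crictl images | grep echo-server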

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1002 21:27:07.088191 1300120 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:27:07.089043 1300120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:27:07.089083 1300120 out.go:374] Setting ErrFile to fd 2...
	I1002 21:27:07.089102 1300120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:27:07.089401 1300120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:27:07.090183 1300120 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:27:07.090360 1300120 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:27:07.090912 1300120 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
	I1002 21:27:07.109903 1300120 ssh_runner.go:195] Run: systemctl --version
	I1002 21:27:07.109969 1300120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
	I1002 21:27:07.133490 1300120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
	I1002 21:27:07.237306 1300120 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1002 21:27:07.237375 1300120 cache_images.go:254] Failed to load cached images for "functional-758263": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1002 21:27:07.237396 1300120 cache_images.go:266] failed pushing to: functional-758263

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-758263
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image save --daemon kicbase/echo-server:functional-758263 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-758263
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-758263: exit status 1 (17.905662ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-758263

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-758263

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
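
`image save --daemon` runs the transfer in the opposite direction, exporting from the cluster runtime back into the host Docker daemon under a localhost/ prefix, which is why the test inspects localhost/kicbase/echo-server:functional-758263. With the earlier loads into CRI-O having failed, there was nothing to export, and the inspect correctly comes back empty. A hedged host-side confirmation:

	docker image ls localhost/kicbase/echo-server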

                                                
                                    
TestJSONOutput/pause/Command (1.86s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-473817 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-473817 --output=json --user=testUser: exit status 80 (1.862957404s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1e94194c-e4dd-4779-98a0-d9976a4a7ae8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-473817 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5f5654b7-4b85-4e50-8a66-8f8002984b3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T21:39:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"4442c9f8-9707-4565-bea8-92d5fde475c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-473817 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.86s)
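
The GUEST_PAUSE error occurs before any container is paused: minikube first enumerates running containers with `sudo runc list -f json`, and inside this guest /run/runc does not exist, so the enumeration itself fails. The unpause test below dies on the identical error one step later. Hedged in-guest checks, assuming the json-output-473817 profile still exists:

	out/minikube-linux-arm64 -p json-output-473817 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 -p json-output-473817 ssh -- sudo crictl ps --quiet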

                                                
                                    
TestJSONOutput/unpause/Command (1.9s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-473817 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-473817 --output=json --user=testUser: exit status 80 (1.895688191s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18b787de-f533-4803-8461-d7d15ca6752d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-473817 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9a1d3155-56e1-426c-b1f3-abc928bf87e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-02T21:39:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"2f22ce58-6c4a-4e3d-8990-6f00221c45dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_1.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-473817 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.90s)

                                                
                                    
TestPause/serial/Pause (7.62s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-449722 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-449722 --alsologtostderr -v=5: exit status 80 (2.625151178s)

                                                
                                                
-- stdout --
	* Pausing node pause-449722 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:02:16.687622 1431964 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:02:16.690145 1431964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:02:16.690163 1431964 out.go:374] Setting ErrFile to fd 2...
	I1002 22:02:16.690169 1431964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:02:16.690484 1431964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:02:16.690763 1431964 out.go:368] Setting JSON to false
	I1002 22:02:16.690782 1431964 mustload.go:65] Loading cluster: pause-449722
	I1002 22:02:16.691250 1431964 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:16.691909 1431964 cli_runner.go:164] Run: docker container inspect pause-449722 --format={{.State.Status}}
	I1002 22:02:16.716482 1431964 host.go:66] Checking if "pause-449722" exists ...
	I1002 22:02:16.716803 1431964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:02:16.848033 1431964 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:02:16.82196464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:02:16.848686 1431964 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-449722 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:02:16.851623 1431964 out.go:179] * Pausing node pause-449722 ... 
	I1002 22:02:16.855413 1431964 host.go:66] Checking if "pause-449722" exists ...
	I1002 22:02:16.855774 1431964 ssh_runner.go:195] Run: systemctl --version
	I1002 22:02:16.855830 1431964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:16.890335 1431964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:16.993172 1431964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:17.011809 1431964 pause.go:51] kubelet running: true
	I1002 22:02:17.011885 1431964 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:02:17.293372 1431964 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:02:17.293471 1431964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:02:17.368326 1431964 cri.go:89] found id: "5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62"
	I1002 22:02:17.368349 1431964 cri.go:89] found id: "5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c"
	I1002 22:02:17.368353 1431964 cri.go:89] found id: "c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4"
	I1002 22:02:17.368357 1431964 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:17.368361 1431964 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:17.368365 1431964 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:17.368369 1431964 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:17.368372 1431964 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:17.368375 1431964 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:17.368385 1431964 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:17.368389 1431964 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:17.368393 1431964 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:17.368396 1431964 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:17.368398 1431964 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:17.368402 1431964 cri.go:89] found id: ""
	I1002 22:02:17.368452 1431964 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:02:17.379209 1431964 retry.go:31] will retry after 236.243512ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:17Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:02:17.615711 1431964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:17.629034 1431964 pause.go:51] kubelet running: false
	I1002 22:02:17.629100 1431964 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:02:17.779359 1431964 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:02:17.779452 1431964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:02:17.848096 1431964 cri.go:89] found id: "5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62"
	I1002 22:02:17.848164 1431964 cri.go:89] found id: "5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c"
	I1002 22:02:17.848175 1431964 cri.go:89] found id: "c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4"
	I1002 22:02:17.848179 1431964 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:17.848182 1431964 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:17.848186 1431964 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:17.848189 1431964 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:17.848192 1431964 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:17.848202 1431964 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:17.848212 1431964 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:17.848216 1431964 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:17.848219 1431964 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:17.848223 1431964 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:17.848234 1431964 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:17.848240 1431964 cri.go:89] found id: ""
	I1002 22:02:17.848291 1431964 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:02:17.859530 1431964 retry.go:31] will retry after 237.423788ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:17Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:02:18.098145 1431964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:18.111618 1431964 pause.go:51] kubelet running: false
	I1002 22:02:18.111688 1431964 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:02:18.261789 1431964 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:02:18.261880 1431964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:02:18.325052 1431964 cri.go:89] found id: "5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62"
	I1002 22:02:18.325124 1431964 cri.go:89] found id: "5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c"
	I1002 22:02:18.325143 1431964 cri.go:89] found id: "c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4"
	I1002 22:02:18.325159 1431964 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:18.325189 1431964 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:18.325209 1431964 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:18.325224 1431964 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:18.325241 1431964 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:18.325259 1431964 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:18.325291 1431964 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:18.325313 1431964 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:18.325330 1431964 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:18.325348 1431964 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:18.325370 1431964 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:18.325397 1431964 cri.go:89] found id: ""
	I1002 22:02:18.325469 1431964 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:02:18.336279 1431964 retry.go:31] will retry after 636.069775ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:18Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:02:18.973174 1431964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:18.986277 1431964 pause.go:51] kubelet running: false
	I1002 22:02:18.986345 1431964 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:02:19.129086 1431964 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:02:19.129181 1431964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:02:19.201820 1431964 cri.go:89] found id: "5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62"
	I1002 22:02:19.201875 1431964 cri.go:89] found id: "5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c"
	I1002 22:02:19.201881 1431964 cri.go:89] found id: "c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4"
	I1002 22:02:19.201884 1431964 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:19.201888 1431964 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:19.201892 1431964 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:19.201895 1431964 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:19.201897 1431964 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:19.201900 1431964 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:19.201906 1431964 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:19.201966 1431964 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:19.201970 1431964 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:19.201973 1431964 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:19.201976 1431964 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:19.201979 1431964 cri.go:89] found id: ""
	I1002 22:02:19.202086 1431964 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:02:19.216544 1431964 out.go:203] 
	W1002 22:02:19.219588 1431964 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:02:19.219611 1431964 out.go:285] * 
	* 
	W1002 22:02:19.229067 1431964 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:02:19.231957 1431964 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-449722 --alsologtostderr -v=5" : exit status 80
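The GUEST_PAUSE failure above is minikube shelling out to "sudo runc list -f json", which reads runc's default state directory /run/runc; the node reports that directory missing, which can happen when the CRI runtime (CRI-O here) keeps its container state under a different runtime root. A diagnostic sketch for comparing the two views, assuming SSH access to the pause-449722 node; crictl is the runtime-agnostic path the logs below use:

	# the exact call minikube makes, against runc's default root (fails on this node)
	minikube -p pause-449722 ssh -- sudo runc list -f json
	# list the same containers through the CRI instead
	minikube -p pause-449722 ssh -- sudo crictl ps -a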
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-449722
helpers_test.go:243: (dbg) docker inspect pause-449722:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd",
	        "Created": "2025-10-02T22:00:42.346436109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1426392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:00:42.408851396Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/hosts",
	        "LogPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd-json.log",
	        "Name": "/pause-449722",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-449722:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-449722",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd",
	                "LowerDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-449722",
	                "Source": "/var/lib/docker/volumes/pause-449722/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-449722",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-449722",
	                "name.minikube.sigs.k8s.io": "pause-449722",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c8f56f37f34e3493ea2eb23a4219a2d5dad53c93d1fb4fb1bbd7aa431671faa",
	            "SandboxKey": "/var/run/docker/netns/8c8f56f37f34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-449722": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:03:48:d2:c2:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93e92c2d024dd4a9175c10b5adf12a14b699e27f18e62dfb5c2bdcd9fcdd0167",
	                    "EndpointID": "47b7dcf99c5e870c4ef1af66d8b61c00e758633d2a8ae94b09a855a9019b4797",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-449722",
	                        "3851e1deb0c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
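Single fields can be pulled out of this inspect output with a Go template instead of scanning the JSON; the provisioning log below uses this same template to recover the host port mapped to the container's 22/tcp (shown here as a sketch against this container):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-449722
	# prints 34526 for this run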
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-449722 -n pause-449722
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-449722 -n pause-449722: exit status 2 (340.434304ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-449722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-449722 logs -n 25: (1.571727889s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-732300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p missing-upgrade-385082 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-385082    │ jenkins │ v1.32.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p missing-upgrade-385082 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-385082    │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:58 UTC │
	│ delete  │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ ssh     │ -p NoKubernetes-732300 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	│ stop    │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p NoKubernetes-732300 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ delete  │ -p missing-upgrade-385082                                                                                                                │ missing-upgrade-385082    │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:59 UTC │
	│ ssh     │ -p NoKubernetes-732300 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	│ delete  │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p stopped-upgrade-679793 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-679793    │ jenkins │ v1.32.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:59 UTC │
	│ stop    │ -p kubernetes-upgrade-186867                                                                                                             │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │                     │
	│ stop    │ stopped-upgrade-679793 stop                                                                                                              │ stopped-upgrade-679793    │ jenkins │ v1.32.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p stopped-upgrade-679793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-679793    │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ delete  │ -p stopped-upgrade-679793                                                                                                                │ stopped-upgrade-679793    │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p running-upgrade-578747 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-578747    │ jenkins │ v1.32.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 22:00 UTC │
	│ start   │ -p running-upgrade-578747 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-578747    │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:00 UTC │
	│ delete  │ -p running-upgrade-578747                                                                                                                │ running-upgrade-578747    │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:00 UTC │
	│ start   │ -p pause-449722 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:01 UTC │
	│ start   │ -p pause-449722 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:01 UTC │ 02 Oct 25 22:02 UTC │
	│ pause   │ -p pause-449722 --alsologtostderr -v=5                                                                                                   │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
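	# The last three Audit entries above are the reproduction sequence for this
	# failure (a sketch, assuming the same docker driver and crio runtime as this run):
	out/minikube-linux-arm64 start -p pause-449722 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p pause-449722 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 pause -p pause-449722 --alsologtostderr -v=5   # exits status 80 with GUEST_PAUSE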
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:01:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:01:59.226431 1430316 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:01:59.226547 1430316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:59.226558 1430316 out.go:374] Setting ErrFile to fd 2...
	I1002 22:01:59.226563 1430316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:59.226812 1430316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:01:59.227172 1430316 out.go:368] Setting JSON to false
	I1002 22:01:59.228106 1430316 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24245,"bootTime":1759418275,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:01:59.228171 1430316 start.go:140] virtualization:  
	I1002 22:01:59.232574 1430316 out.go:179] * [pause-449722] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:01:59.237434 1430316 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:01:59.237509 1430316 notify.go:220] Checking for updates...
	I1002 22:01:59.243683 1430316 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:01:59.246736 1430316 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:01:59.249682 1430316 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:01:59.252488 1430316 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:01:59.255411 1430316 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:01:59.258753 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:01:59.259368 1430316 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:01:59.294599 1430316 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:01:59.294707 1430316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:59.384834 1430316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:01:59.374551717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:01:59.384974 1430316 docker.go:318] overlay module found
	I1002 22:01:59.388177 1430316 out.go:179] * Using the docker driver based on existing profile
	I1002 22:01:59.392002 1430316 start.go:304] selected driver: docker
	I1002 22:01:59.392020 1430316 start.go:924] validating driver "docker" against &{Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:01:59.392149 1430316 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:01:59.392248 1430316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:59.476838 1430316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:01:59.466168518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:01:59.477248 1430316 cni.go:84] Creating CNI manager for ""
	I1002 22:01:59.477319 1430316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:59.477367 1430316 start.go:348] cluster config:
	{Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:01:59.483156 1430316 out.go:179] * Starting "pause-449722" primary control-plane node in "pause-449722" cluster
	I1002 22:01:59.485993 1430316 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:01:59.488898 1430316 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:01:59.491781 1430316 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:01:59.491842 1430316 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:01:59.491860 1430316 cache.go:58] Caching tarball of preloaded images
	I1002 22:01:59.491951 1430316 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:01:59.491966 1430316 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:01:59.492109 1430316 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/config.json ...
	I1002 22:01:59.492341 1430316 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:01:59.516548 1430316 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:01:59.516575 1430316 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:01:59.516596 1430316 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:01:59.516618 1430316 start.go:360] acquireMachinesLock for pause-449722: {Name:mk9f4e85f1e6af4159d662778a2e02f9e2b774c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:01:59.516681 1430316 start.go:364] duration metric: took 37.185µs to acquireMachinesLock for "pause-449722"
	I1002 22:01:59.516706 1430316 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:01:59.516718 1430316 fix.go:54] fixHost starting: 
	I1002 22:01:59.517020 1430316 cli_runner.go:164] Run: docker container inspect pause-449722 --format={{.State.Status}}
	I1002 22:01:59.535811 1430316 fix.go:112] recreateIfNeeded on pause-449722: state=Running err=<nil>
	W1002 22:01:59.535852 1430316 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:01:55.835544 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:01:55.835971 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:01:55.836022 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:55.836079 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:55.876766 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:55.876786 1418276 cri.go:89] found id: ""
	I1002 22:01:55.876794 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:01:55.876851 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:55.883511 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:55.883582 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:55.936608 1418276 cri.go:89] found id: ""
	I1002 22:01:55.936636 1418276 logs.go:282] 0 containers: []
	W1002 22:01:55.936644 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:01:55.936651 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:55.936710 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:55.974833 1418276 cri.go:89] found id: ""
	I1002 22:01:55.974857 1418276 logs.go:282] 0 containers: []
	W1002 22:01:55.974872 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:01:55.974879 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:55.974931 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:56.013594 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:56.013620 1418276 cri.go:89] found id: ""
	I1002 22:01:56.013629 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:01:56.013690 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:56.017977 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:56.018090 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:56.048834 1418276 cri.go:89] found id: ""
	I1002 22:01:56.048860 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.048874 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:01:56.048882 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:56.048941 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:56.078998 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:56.079024 1418276 cri.go:89] found id: ""
	I1002 22:01:56.079034 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:01:56.079124 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:56.083182 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:56.083262 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:56.111329 1418276 cri.go:89] found id: ""
	I1002 22:01:56.111370 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.111380 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:01:56.111387 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:01:56.111460 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:01:56.139377 1418276 cri.go:89] found id: ""
	I1002 22:01:56.139400 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.139409 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:01:56.139418 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:56.139435 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:56.256371 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:01:56.256413 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:01:56.279639 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:56.279673 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:01:56.390670 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:01:56.390695 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:01:56.390734 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:56.426347 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:01:56.426380 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:56.482382 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:01:56.482418 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:56.514457 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:56.514484 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:01:56.576640 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:01:56.576735 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:01:59.108176 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:01:59.108556 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:01:59.108595 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:59.108648 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:59.180518 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:59.180538 1418276 cri.go:89] found id: ""
	I1002 22:01:59.180546 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:01:59.180615 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.185494 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:59.185567 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:59.230782 1418276 cri.go:89] found id: ""
	I1002 22:01:59.230888 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.230896 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:01:59.230903 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:59.230955 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:59.262637 1418276 cri.go:89] found id: ""
	I1002 22:01:59.262720 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.262732 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:01:59.262740 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:59.262796 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:59.311788 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:59.311807 1418276 cri.go:89] found id: ""
	I1002 22:01:59.311816 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:01:59.311869 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.318663 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:59.318739 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:59.380315 1418276 cri.go:89] found id: ""
	I1002 22:01:59.380337 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.380344 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:01:59.380351 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:59.380406 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:59.428720 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:59.428743 1418276 cri.go:89] found id: ""
	I1002 22:01:59.428770 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:01:59.428845 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.433082 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:59.433163 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:59.471675 1418276 cri.go:89] found id: ""
	I1002 22:01:59.471702 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.471711 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:01:59.471718 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:01:59.471776 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:01:59.512588 1418276 cri.go:89] found id: ""
	I1002 22:01:59.512613 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.512621 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:01:59.512630 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:01:59.512641 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:01:59.556869 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:59.556898 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:59.696362 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:01:59.696441 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:01:59.717125 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:59.717152 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:01:59.539062 1430316 out.go:252] * Updating the running docker "pause-449722" container ...
	I1002 22:01:59.539144 1430316 machine.go:93] provisionDockerMachine start ...
	I1002 22:01:59.539251 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.566621 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.566958 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.566974 1430316 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:01:59.710201 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-449722
	
	I1002 22:01:59.710234 1430316 ubuntu.go:182] provisioning hostname "pause-449722"
	I1002 22:01:59.710306 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.737651 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.737952 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.737963 1430316 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-449722 && echo "pause-449722" | sudo tee /etc/hostname
	I1002 22:01:59.912204 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-449722
	
	I1002 22:01:59.912304 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.955802 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.956152 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.956175 1430316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-449722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-449722/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-449722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:02:00.314857 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:02:00.314896 1430316 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:02:00.314928 1430316 ubuntu.go:190] setting up certificates
	I1002 22:02:00.314941 1430316 provision.go:84] configureAuth start
	I1002 22:02:00.315018 1430316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-449722
	I1002 22:02:00.382302 1430316 provision.go:143] copyHostCerts
	I1002 22:02:00.382386 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:02:00.382415 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:02:00.382510 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:02:00.382636 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:02:00.382647 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:02:00.382677 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:02:00.382744 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:02:00.382754 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:02:00.382782 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:02:00.382843 1430316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.pause-449722 san=[127.0.0.1 192.168.76.2 localhost minikube pause-449722]
	I1002 22:02:00.772862 1430316 provision.go:177] copyRemoteCerts
	I1002 22:02:00.772943 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:02:00.772988 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:00.791689 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:00.890351 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:02:00.909863 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 22:02:00.928872 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:02:00.947566 1430316 provision.go:87] duration metric: took 632.602548ms to configureAuth
	I1002 22:02:00.947590 1430316 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:02:00.947817 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:00.947931 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:00.966778 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:00.967145 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:02:00.967162 1430316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1002 22:01:59.806679 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:01:59.806700 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:01:59.806713 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:59.863553 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:01:59.863630 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:59.942340 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:01:59.942381 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:59.978814 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:59.978901 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:02.750115 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:02.750584 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:02.750634 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:02.750697 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:02.777205 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:02.777228 1418276 cri.go:89] found id: ""
	I1002 22:02:02.777236 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:02.777293 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.781040 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:02.781211 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:02.807185 1418276 cri.go:89] found id: ""
	I1002 22:02:02.807208 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.807217 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:02.807223 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:02.807288 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:02.837424 1418276 cri.go:89] found id: ""
	I1002 22:02:02.837447 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.837456 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:02.837462 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:02.837520 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:02.864305 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:02.864327 1418276 cri.go:89] found id: ""
	I1002 22:02:02.864335 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:02.864391 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.868201 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:02.868275 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:02.893478 1418276 cri.go:89] found id: ""
	I1002 22:02:02.893503 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.893511 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:02.893518 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:02.893578 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:02.921217 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:02.921240 1418276 cri.go:89] found id: ""
	I1002 22:02:02.921249 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:02.921305 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.925077 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:02.925156 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:02.951022 1418276 cri.go:89] found id: ""
	I1002 22:02:02.951048 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.951057 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:02.951063 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:02.951123 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:02.977418 1418276 cri.go:89] found id: ""
	I1002 22:02:02.977442 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.977451 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:02.977459 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:02.977471 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:03.037540 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:03.037577 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:03.066331 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:03.066361 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:03.125544 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:03.125580 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:03.155950 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:03.155976 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:03.272687 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:03.272725 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:03.289120 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:03.289146 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:03.372734 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:03.372753 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:03.372766 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
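
For reference: every log-gathering cycle in this run follows the same CRI pattern: resolve container IDs with "crictl ps -a --quiet --name=<component>", then tail each ID with "crictl logs --tail 400". A minimal Go sketch of that loop, assuming crictl on PATH and passwordless sudo (the helper name is ours, not minikube's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			// Tail the last 400 lines, as the run above does per container.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("logs for %s:\n%s\n", id, logs)
		}
	}
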
	I1002 22:02:06.348231 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:02:06.348252 1430316 machine.go:96] duration metric: took 6.80909796s to provisionDockerMachine
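
For reference: the provisioning step above writes a one-line sysconfig drop-in over SSH and restarts CRI-O. A minimal Go sketch of composing that remote command, assuming a hypothetical runSSH helper in place of minikube's internal SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runSSH is a stand-in: minikube uses its own SSH client, not the ssh binary.
	func runSSH(host, cmd string) error {
		return exec.Command("ssh", host, cmd).Run()
	}

	func main() {
		opts := "--insecure-registry 10.96.0.0/12 "
		// The command string mirrors the log verbatim.
		cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
		if err := runSSH("docker@127.0.0.1", cmd); err != nil {
			fmt.Println("provisioning failed:", err)
		}
	}
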
	I1002 22:02:06.348264 1430316 start.go:293] postStartSetup for "pause-449722" (driver="docker")
	I1002 22:02:06.348274 1430316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:02:06.348342 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:02:06.348399 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.382251 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.491607 1430316 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:02:06.495944 1430316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:02:06.495978 1430316 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:02:06.495989 1430316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:02:06.496043 1430316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:02:06.496122 1430316 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:02:06.496220 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:02:06.505977 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:06.527636 1430316 start.go:296] duration metric: took 179.356563ms for postStartSetup
	I1002 22:02:06.527739 1430316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:02:06.527788 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.556678 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.665234 1430316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:02:06.675769 1430316 fix.go:56] duration metric: took 7.159051636s for fixHost
	I1002 22:02:06.675795 1430316 start.go:83] releasing machines lock for "pause-449722", held for 7.159099791s
	I1002 22:02:06.675888 1430316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-449722
	I1002 22:02:06.701403 1430316 ssh_runner.go:195] Run: cat /version.json
	I1002 22:02:06.701423 1430316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:02:06.701456 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.701476 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.726190 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.738221 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.911418 1430316 ssh_runner.go:195] Run: systemctl --version
	I1002 22:02:06.918381 1430316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:02:06.960026 1430316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:02:06.964874 1430316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:02:06.964955 1430316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:02:06.973937 1430316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
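
For reference: the CNI cleanup above looks for bridge or podman config files in /etc/cni/net.d and renames them with a .mk_disabled suffix. A pure-Go sketch of the same rename, under the same filename assumptions:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			// Match *bridge* or *podman* files that are not already disabled.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
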
	I1002 22:02:06.973958 1430316 start.go:495] detecting cgroup driver to use...
	I1002 22:02:06.973990 1430316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:02:06.974079 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:02:06.989483 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:02:07.007299 1430316 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:02:07.007369 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:02:07.023264 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:02:07.036753 1430316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:02:07.175570 1430316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:02:07.317988 1430316 docker.go:234] disabling docker service ...
	I1002 22:02:07.318073 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:02:07.333393 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:02:07.346952 1430316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:02:07.485850 1430316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:02:07.623865 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:02:07.637098 1430316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:02:07.652668 1430316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:02:07.652787 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.662985 1430316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:02:07.663084 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.672523 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.682196 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.691331 1430316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:02:07.700007 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.708985 1430316 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.717490 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.726622 1430316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:02:07.734138 1430316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:02:07.741673 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:07.879021 1430316 ssh_runner.go:195] Run: sudo systemctl restart crio
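
For reference: the sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed before restarting CRI-O. A Go sketch of two of those substitutions (pause_image and cgroup_manager), with the values quoted from the log; error handling is abbreviated:

	package main

	import (
		"os"
		"regexp"
	)

	// replaceLine rewrites every line matching pattern with repl, like sed -i.
	func replaceLine(path, pattern, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(pattern)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		_ = replaceLine(conf, `(?m)^.*pause_image = .*$`,
			`pause_image = "registry.k8s.io/pause:3.10.1"`)
		_ = replaceLine(conf, `(?m)^.*cgroup_manager = .*$`,
			`cgroup_manager = "cgroupfs"`)
	}
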
	I1002 22:02:08.062147 1430316 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:02:08.062309 1430316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:02:08.067240 1430316 start.go:563] Will wait 60s for crictl version
	I1002 22:02:08.067355 1430316 ssh_runner.go:195] Run: which crictl
	I1002 22:02:08.071763 1430316 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:02:08.105895 1430316 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:02:08.106123 1430316 ssh_runner.go:195] Run: crio --version
	I1002 22:02:08.139919 1430316 ssh_runner.go:195] Run: crio --version
	I1002 22:02:08.172406 1430316 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:02:08.175444 1430316 cli_runner.go:164] Run: docker network inspect pause-449722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:02:08.192901 1430316 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:02:08.197186 1430316 kubeadm.go:883] updating cluster {Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:02:08.197331 1430316 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:02:08.197385 1430316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:08.231153 1430316 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:08.231177 1430316 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:02:08.231232 1430316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:08.257729 1430316 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:08.257754 1430316 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:02:08.257763 1430316 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:02:08.257876 1430316 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-449722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
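
For reference: the kubelet unit text above is what the later "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)" step writes out. A minimal Go sketch that materializes it as a systemd drop-in; the unit body is quoted from the log:

	package main

	import "os"

	const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-449722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

[Install]
`

	func main() {
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			panic(err)
		}
		// A systemctl daemon-reload, as the run does next, picks this up.
		if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
	}
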
	I1002 22:02:08.257963 1430316 ssh_runner.go:195] Run: crio config
	I1002 22:02:08.331872 1430316 cni.go:84] Creating CNI manager for ""
	I1002 22:02:08.331895 1430316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:02:08.331915 1430316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:02:08.331938 1430316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-449722 NodeName:pause-449722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:02:08.332071 1430316 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-449722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:02:08.332151 1430316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:02:08.340230 1430316 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:02:08.340300 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:02:08.347935 1430316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 22:02:08.362875 1430316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:02:08.375761 1430316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
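
For reference: kubeadm.yaml.new, written above, is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, as printed earlier). A Go sketch that walks such a stream and prints each document's kind, assuming the gopkg.in/yaml.v3 package:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				fmt.Println(err)
				return
			}
			// e.g. kubeadm.k8s.io/v1beta4 InitConfiguration
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}
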
	I1002 22:02:08.388462 1430316 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:02:08.392195 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:08.535797 1430316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:02:08.552602 1430316 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722 for IP: 192.168.76.2
	I1002 22:02:08.552625 1430316 certs.go:195] generating shared ca certs ...
	I1002 22:02:08.552641 1430316 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:08.552808 1430316 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:02:08.552856 1430316 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:02:08.552867 1430316 certs.go:257] generating profile certs ...
	I1002 22:02:08.552956 1430316 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key
	I1002 22:02:08.553029 1430316 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.key.c04b0f76
	I1002 22:02:08.553066 1430316 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.key
	I1002 22:02:08.553204 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:02:08.553245 1430316 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:02:08.553258 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:02:08.553282 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:02:08.553312 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:02:08.553349 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:02:08.553415 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:08.554126 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:02:08.574203 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:02:08.592080 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:02:08.610050 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:02:08.628216 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 22:02:08.648113 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:02:08.666903 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:02:08.686358 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:02:08.710673 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:02:08.728358 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:02:08.746391 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:02:08.765205 1430316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:02:08.778018 1430316 ssh_runner.go:195] Run: openssl version
	I1002 22:02:08.784899 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:02:08.793588 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.797444 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.797538 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.838801 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:02:08.847586 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:02:08.860703 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.864818 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.864887 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.906222 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:02:08.914282 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:02:08.924325 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.928830 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.928909 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.970240 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
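
For reference: each CA install above pairs an openssl subject-hash computation with a "<hash>.0" symlink in /etc/ssl/certs, which is how OpenSSL-based clients locate trusted certs. A Go sketch of that pairing, assuming the openssl binary is available:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCert symlinks /etc/ssl/certs/<subject_hash>.0 to the given PEM.
	func installCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // ln -fs semantics: replace a stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
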
	I1002 22:02:08.981781 1430316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:02:08.985652 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:02:09.028202 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:02:09.070223 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:02:09.116783 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:02:09.174885 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:02:05.914562 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:05.915036 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:05.915097 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:05.915168 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:05.941634 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:05.941656 1418276 cri.go:89] found id: ""
	I1002 22:02:05.941664 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:05.941721 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:05.945502 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:05.945611 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:05.971759 1418276 cri.go:89] found id: ""
	I1002 22:02:05.971783 1418276 logs.go:282] 0 containers: []
	W1002 22:02:05.971791 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:05.971798 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:05.971856 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:05.997166 1418276 cri.go:89] found id: ""
	I1002 22:02:05.997190 1418276 logs.go:282] 0 containers: []
	W1002 22:02:05.997199 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:05.997206 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:05.997263 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:06.028332 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:06.028359 1418276 cri.go:89] found id: ""
	I1002 22:02:06.028368 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:06.028437 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:06.032547 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:06.032623 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:06.059845 1418276 cri.go:89] found id: ""
	I1002 22:02:06.059869 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.059878 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:06.059895 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:06.059971 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:06.087587 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:06.087614 1418276 cri.go:89] found id: ""
	I1002 22:02:06.087623 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:06.087700 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:06.091674 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:06.091752 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:06.121132 1418276 cri.go:89] found id: ""
	I1002 22:02:06.121155 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.121170 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:06.121181 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:06.121240 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:06.161679 1418276 cri.go:89] found id: ""
	I1002 22:02:06.161706 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.161721 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:06.161732 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:06.161743 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:06.248534 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:06.248551 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:06.248564 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:06.285181 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:06.285213 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:06.363486 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:06.363580 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:06.412747 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:06.412771 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:06.483461 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:06.483497 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:06.551884 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:06.551913 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:06.675568 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:06.675601 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:09.198130 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:09.198546 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:09.198586 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:09.198657 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:09.254887 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:09.254912 1418276 cri.go:89] found id: ""
	I1002 22:02:09.254925 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:09.254986 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.260155 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:09.260240 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:09.316079 1418276 cri.go:89] found id: ""
	I1002 22:02:09.316101 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.316110 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:09.316116 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:09.316190 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:09.372277 1418276 cri.go:89] found id: ""
	I1002 22:02:09.372300 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.372308 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:09.372315 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:09.372381 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:09.430195 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:09.430228 1418276 cri.go:89] found id: ""
	I1002 22:02:09.430238 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:09.430316 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.437220 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:09.437305 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:09.484260 1418276 cri.go:89] found id: ""
	I1002 22:02:09.484379 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.484403 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:09.484447 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:09.484572 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:09.525085 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:09.525181 1418276 cri.go:89] found id: ""
	I1002 22:02:09.525225 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:09.525352 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.534859 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:09.535051 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:09.612578 1418276 cri.go:89] found id: ""
	I1002 22:02:09.612676 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.612700 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:09.612737 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:09.612824 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:09.664653 1418276 cri.go:89] found id: ""
	I1002 22:02:09.664742 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.664764 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:09.664802 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:09.664838 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:09.732665 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:09.732760 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:09.289619 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
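
For reference: the "-checkend 86400" probes above ask whether each certificate expires within 24 hours. A pure-Go analogue that parses the PEM and compares NotAfter directly:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the cert at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
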
	I1002 22:02:09.466638 1430316 kubeadm.go:400] StartCluster: {Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:02:09.466757 1430316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:02:09.466833 1430316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:02:09.672361 1430316 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:09.672383 1430316 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:09.672388 1430316 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:09.672392 1430316 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:09.672395 1430316 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:09.672399 1430316 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:09.672412 1430316 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:09.672415 1430316 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:09.672418 1430316 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:09.672426 1430316 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:09.672430 1430316 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:09.672433 1430316 cri.go:89] found id: ""
	I1002 22:02:09.672487 1430316 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:02:09.719346 1430316 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:09Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:02:09.719438 1430316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:02:09.744153 1430316 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:02:09.744181 1430316 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:02:09.744239 1430316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:02:09.765666 1430316 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:02:09.766375 1430316 kubeconfig.go:125] found "pause-449722" server: "https://192.168.76.2:8443"
	I1002 22:02:09.767211 1430316 kapi.go:59] client config for pause-449722: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key", CAFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:02:09.767695 1430316 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 22:02:09.767722 1430316 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 22:02:09.767729 1430316 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 22:02:09.767734 1430316 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 22:02:09.767739 1430316 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 22:02:09.768063 1430316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:02:09.795277 1430316 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:02:09.795308 1430316 kubeadm.go:601] duration metric: took 51.122221ms to restartPrimaryControlPlane
	I1002 22:02:09.795318 1430316 kubeadm.go:402] duration metric: took 328.690318ms to StartCluster
	I1002 22:02:09.795333 1430316 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:09.795404 1430316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:02:09.796244 1430316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:09.796461 1430316 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:02:09.796831 1430316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:02:09.796979 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:09.800001 1430316 out.go:179] * Enabled addons: 
	I1002 22:02:09.800102 1430316 out.go:179] * Verifying Kubernetes components...
	I1002 22:02:09.803152 1430316 addons.go:514] duration metric: took 6.301667ms for enable addons: enabled=[]
	I1002 22:02:09.803311 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:10.213213 1430316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:02:10.238330 1430316 node_ready.go:35] waiting up to 6m0s for node "pause-449722" to be "Ready" ...
	I1002 22:02:13.802256 1430316 node_ready.go:49] node "pause-449722" is "Ready"
	I1002 22:02:13.802283 1430316 node_ready.go:38] duration metric: took 3.563906225s for node "pause-449722" to be "Ready" ...
	I1002 22:02:13.802297 1430316 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:02:13.802360 1430316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:02:13.823442 1430316 api_server.go:72] duration metric: took 4.026943583s to wait for apiserver process to appear ...
	I1002 22:02:13.823463 1430316 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:02:13.823483 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:13.838372 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 22:02:13.838395 1430316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 22:02:09.845253 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:09.845296 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:09.893790 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:09.893819 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:09.993106 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:09.993144 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:10.070702 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:10.070735 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:10.218144 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:10.218188 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:10.264151 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:10.264188 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:10.402616 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:12.903243 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:12.903638 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:12.903687 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:12.903749 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:12.933563 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:12.933586 1418276 cri.go:89] found id: ""
	I1002 22:02:12.933595 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:12.933655 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:12.938071 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:12.938154 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:12.967157 1418276 cri.go:89] found id: ""
	I1002 22:02:12.967182 1418276 logs.go:282] 0 containers: []
	W1002 22:02:12.967191 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:12.967198 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:12.967259 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:13.007858 1418276 cri.go:89] found id: ""
	I1002 22:02:13.007886 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.007895 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:13.007902 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:13.007966 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:13.057253 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:13.057277 1418276 cri.go:89] found id: ""
	I1002 22:02:13.057285 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:13.057350 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:13.061385 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:13.061463 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:13.095021 1418276 cri.go:89] found id: ""
	I1002 22:02:13.095046 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.095055 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:13.095062 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:13.095125 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:13.135966 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:13.135990 1418276 cri.go:89] found id: ""
	I1002 22:02:13.135998 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:13.136058 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:13.142848 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:13.142930 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:13.190720 1418276 cri.go:89] found id: ""
	I1002 22:02:13.190746 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.190755 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:13.190762 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:13.190824 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:13.224778 1418276 cri.go:89] found id: ""
	I1002 22:02:13.224803 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.224811 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:13.224821 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:13.224833 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:13.274105 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:13.274144 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:13.365879 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:13.365915 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:13.419829 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:13.419859 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:13.508518 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:13.508564 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:13.585473 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:13.585504 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:13.732735 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:13.732772 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:13.767439 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:13.767468 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:13.898117 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:14.323858 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:14.333395 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:02:14.333429 1430316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:02:14.823589 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:14.833077 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:02:14.835136 1430316 api_server.go:141] control plane version: v1.34.1
	I1002 22:02:14.835209 1430316 api_server.go:131] duration metric: took 1.011738692s to wait for apiserver health ...
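The healthz progression above (403 while the anonymous probe is still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) is the normal shape of an apiserver restart. A minimal sketch of a probe loop that treats any non-200 status as retryable — this is an illustration, not minikube's api_server.go, and the InsecureSkipVerify is for brevity only; a real probe should trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls /healthz until it returns 200 OK or the deadline
	// expires; 403 and 500 responses are treated as "not ready yet".
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Illustration only: skipping verification avoids wiring up the CA here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}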
	I1002 22:02:14.835233 1430316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:02:14.838758 1430316 system_pods.go:59] 7 kube-system pods found
	I1002 22:02:14.838789 1430316 system_pods.go:61] "coredns-66bc5c9577-dzf4t" [f4e8dae8-7cc8-475e-8da2-b04e1cea5aed] Running
	I1002 22:02:14.838796 1430316 system_pods.go:61] "etcd-pause-449722" [8b49eca8-0b72-4049-bb6a-a4783e041caa] Running
	I1002 22:02:14.838802 1430316 system_pods.go:61] "kindnet-lbrbm" [c1ae6269-d077-4adb-9511-fe7466fd8e15] Running
	I1002 22:02:14.838806 1430316 system_pods.go:61] "kube-apiserver-pause-449722" [531bbc4e-f650-46c4-8fe3-699cc5a02c5b] Running
	I1002 22:02:14.838810 1430316 system_pods.go:61] "kube-controller-manager-pause-449722" [034e4449-3f02-4c78-9b0f-c2e83a75a180] Running
	I1002 22:02:14.838815 1430316 system_pods.go:61] "kube-proxy-mm5sk" [5dc0b155-d08a-4459-834e-2be1aabc4aa7] Running
	I1002 22:02:14.838821 1430316 system_pods.go:61] "kube-scheduler-pause-449722" [69112277-9eb1-4849-a2dd-d174886284fa] Running
	I1002 22:02:14.838826 1430316 system_pods.go:74] duration metric: took 3.576541ms to wait for pod list to return data ...
	I1002 22:02:14.838833 1430316 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:02:14.845098 1430316 default_sa.go:45] found service account: "default"
	I1002 22:02:14.845121 1430316 default_sa.go:55] duration metric: took 6.28168ms for default service account to be created ...
	I1002 22:02:14.845131 1430316 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:02:14.849498 1430316 system_pods.go:86] 7 kube-system pods found
	I1002 22:02:14.849573 1430316 system_pods.go:89] "coredns-66bc5c9577-dzf4t" [f4e8dae8-7cc8-475e-8da2-b04e1cea5aed] Running
	I1002 22:02:14.849596 1430316 system_pods.go:89] "etcd-pause-449722" [8b49eca8-0b72-4049-bb6a-a4783e041caa] Running
	I1002 22:02:14.849624 1430316 system_pods.go:89] "kindnet-lbrbm" [c1ae6269-d077-4adb-9511-fe7466fd8e15] Running
	I1002 22:02:14.849653 1430316 system_pods.go:89] "kube-apiserver-pause-449722" [531bbc4e-f650-46c4-8fe3-699cc5a02c5b] Running
	I1002 22:02:14.849671 1430316 system_pods.go:89] "kube-controller-manager-pause-449722" [034e4449-3f02-4c78-9b0f-c2e83a75a180] Running
	I1002 22:02:14.849691 1430316 system_pods.go:89] "kube-proxy-mm5sk" [5dc0b155-d08a-4459-834e-2be1aabc4aa7] Running
	I1002 22:02:14.849710 1430316 system_pods.go:89] "kube-scheduler-pause-449722" [69112277-9eb1-4849-a2dd-d174886284fa] Running
	I1002 22:02:14.849740 1430316 system_pods.go:126] duration metric: took 4.602204ms to wait for k8s-apps to be running ...
	I1002 22:02:14.849762 1430316 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:02:14.849860 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:14.864274 1430316 system_svc.go:56] duration metric: took 14.502056ms WaitForService to wait for kubelet
	I1002 22:02:14.864348 1430316 kubeadm.go:586] duration metric: took 5.067854186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:02:14.864384 1430316 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:02:14.873882 1430316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:02:14.873910 1430316 node_conditions.go:123] node cpu capacity is 2
	I1002 22:02:14.873923 1430316 node_conditions.go:105] duration metric: took 9.519872ms to run NodePressure ...
	I1002 22:02:14.873955 1430316 start.go:241] waiting for startup goroutines ...
	I1002 22:02:14.873970 1430316 start.go:246] waiting for cluster config update ...
	I1002 22:02:14.873980 1430316 start.go:255] writing updated cluster config ...
	I1002 22:02:14.874367 1430316 ssh_runner.go:195] Run: rm -f paused
	I1002 22:02:14.879309 1430316 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:02:14.879978 1430316 kapi.go:59] client config for pause-449722: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key", CAFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:02:14.883735 1430316 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzf4t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.895815 1430316 pod_ready.go:94] pod "coredns-66bc5c9577-dzf4t" is "Ready"
	I1002 22:02:14.895848 1430316 pod_ready.go:86] duration metric: took 12.08569ms for pod "coredns-66bc5c9577-dzf4t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.898504 1430316 pod_ready.go:83] waiting for pod "etcd-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.902671 1430316 pod_ready.go:94] pod "etcd-pause-449722" is "Ready"
	I1002 22:02:14.902698 1430316 pod_ready.go:86] duration metric: took 4.169604ms for pod "etcd-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.904737 1430316 pod_ready.go:83] waiting for pod "kube-apiserver-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.910722 1430316 pod_ready.go:94] pod "kube-apiserver-pause-449722" is "Ready"
	I1002 22:02:14.910754 1430316 pod_ready.go:86] duration metric: took 5.989944ms for pod "kube-apiserver-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.912894 1430316 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.282726 1430316 pod_ready.go:94] pod "kube-controller-manager-pause-449722" is "Ready"
	I1002 22:02:15.282756 1430316 pod_ready.go:86] duration metric: took 369.837004ms for pod "kube-controller-manager-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.484381 1430316 pod_ready.go:83] waiting for pod "kube-proxy-mm5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.883391 1430316 pod_ready.go:94] pod "kube-proxy-mm5sk" is "Ready"
	I1002 22:02:15.883424 1430316 pod_ready.go:86] duration metric: took 399.016661ms for pod "kube-proxy-mm5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.083213 1430316 pod_ready.go:83] waiting for pod "kube-scheduler-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.483510 1430316 pod_ready.go:94] pod "kube-scheduler-pause-449722" is "Ready"
	I1002 22:02:16.483584 1430316 pod_ready.go:86] duration metric: took 400.34599ms for pod "kube-scheduler-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.483610 1430316 pod_ready.go:40] duration metric: took 1.604254897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:02:16.563674 1430316 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:02:16.568922 1430316 out.go:179] * Done! kubectl is now configured to use "pause-449722" cluster and "default" namespace by default
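The pod_ready waits in the run above boil down to reading each pod's PodReady condition from the status returned by the apiserver. A minimal client-go sketch of that check — an illustration assuming a kubeconfig at the default location, not minikube's actual pod_ready.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(config)
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			fmt.Printf("%s ready=%v\n", pod.Name, isPodReady(&pod))
		}
	}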
	
	
	==> CRI-O <==
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.677593804Z" level=info msg="Started container" PID=2314 containerID=d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110 description=kube-system/kube-scheduler-pause-449722/kube-scheduler id=202e855e-aabd-4d81-93bc-7ac86927dbd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb1e45df8457161f853f53f5060cf588c73074f0321281287d240d881005114d
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.794226762Z" level=info msg="Created container 5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c: kube-system/etcd-pause-449722/etcd" id=ea8b7232-bcdc-4461-85a0-29ea816aeb60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.798239266Z" level=info msg="Starting container: 5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c" id=6ed853c1-4b6a-49ae-bab9-65c7392ac0bb name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.800357134Z" level=info msg="Created container c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4: kube-system/kube-apiserver-pause-449722/kube-apiserver" id=35ba55e3-f10d-453f-a192-fb22a2ce048d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.802283769Z" level=info msg="Starting container: c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4" id=f0894e50-c2ce-4bfd-8589-a0b8cfdec04f name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.80432881Z" level=info msg="Started container" PID=2348 containerID=5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c description=kube-system/etcd-pause-449722/etcd id=6ed853c1-4b6a-49ae-bab9-65c7392ac0bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba00690ed8465de0495380e9e7ffd47a39e38b64673b4bd9d00620773c160f7
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.811694695Z" level=info msg="Started container" PID=2351 containerID=c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4 description=kube-system/kube-apiserver-pause-449722/kube-apiserver id=f0894e50-c2ce-4bfd-8589-a0b8cfdec04f name=/runtime.v1.RuntimeService/StartContainer sandboxID=29e0d57718d5232c213405e215d0dfbf2a6954c1b149dfd8678d8672f248acb4
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.510586655Z" level=info msg="Created container 5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62: kube-system/kube-proxy-mm5sk/kube-proxy" id=3af6d83d-9bdb-4d12-a2aa-ff0823f5a37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.511757702Z" level=info msg="Starting container: 5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62" id=942346e0-2a02-404d-9336-6017bdaa6934 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.515419057Z" level=info msg="Started container" PID=2366 containerID=5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62 description=kube-system/kube-proxy-mm5sk/kube-proxy id=942346e0-2a02-404d-9336-6017bdaa6934 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a5b4bedf7c9a5e7fb5b64abb989c241de7f326e1ae52a633173190524bd20b9
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.811292883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816208032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816382929Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816465159Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.827420639Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.827577969Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.82765028Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.8338617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.834004344Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.834272277Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842418538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842465683Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842488625Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.845847561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.8458802Z" level=info msg="Updated default CNI network name to kindnet"
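The CREATE/WRITE/RENAME event sequence on 10-kindnet.conflist.temp above is the usual write-to-temp-then-rename pattern for updating a CNI config atomically, so CRI-O's config watcher never observes a half-written file. A minimal sketch of the pattern (paths and contents are illustrative, not kindnet's code):

	package main

	import (
		"os"
		"path/filepath"
	)

	// writeFileAtomic writes data to a .temp sibling and renames it into
	// place; rename(2) is atomic within a filesystem, so readers see either
	// the old config or the complete new one, never a partial write.
	func writeFileAtomic(path string, data []byte) error {
		tmp := path + ".temp"
		if err := os.WriteFile(tmp, data, 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		conf := []byte(`{"name":"kindnet","cniVersion":"0.3.1","plugins":[]}`) // illustrative contents
		if err := writeFileAtomic(filepath.Join("/etc/cni/net.d", "10-kindnet.conflist"), conf); err != nil {
			panic(err)
		}
	}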
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5f49cb654c126       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   10 seconds ago       Running             kube-proxy                1                   4a5b4bedf7c9a       kube-proxy-mm5sk                       kube-system
	5b238744636e8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago       Running             etcd                      1                   4ba00690ed846       etcd-pause-449722                      kube-system
	c6db0a9c33ac8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago       Running             kube-apiserver            1                   29e0d57718d52       kube-apiserver-pause-449722            kube-system
	713a19e763d7a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago       Running             kube-controller-manager   1                   96769c3c8d441       kube-controller-manager-pause-449722   kube-system
	d95817eb05455       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago       Running             kube-scheduler            1                   fb1e45df84571       kube-scheduler-pause-449722            kube-system
	a0fd68147fd14       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   11 seconds ago       Running             coredns                   1                   1544ed896d489       coredns-66bc5c9577-dzf4t               kube-system
	03c5cbc703625       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   11 seconds ago       Running             kindnet-cni               1                   d2ee30865175f       kindnet-lbrbm                          kube-system
	7d066cc2d1e84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Exited              coredns                   0                   1544ed896d489       coredns-66bc5c9577-dzf4t               kube-system
	eb6bcce816ee7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d2ee30865175f       kindnet-lbrbm                          kube-system
	62db619520d97       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   4a5b4bedf7c9a       kube-proxy-mm5sk                       kube-system
	438fa54f734f4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4ba00690ed846       etcd-pause-449722                      kube-system
	929ff0e798eee       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   fb1e45df84571       kube-scheduler-pause-449722            kube-system
	4d2e106700899       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   96769c3c8d441       kube-controller-manager-pause-449722   kube-system
	37a62fce49783       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   29e0d57718d52       kube-apiserver-pause-449722            kube-system
	
	
	==> coredns [7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60658 - 53359 "HINFO IN 7641627972224923487.8635105253622644934. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025330309s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37585 - 49754 "HINFO IN 5796983640354494215.2582439586429005221. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012420412s
	
	
	==> describe nodes <==
	Name:               pause-449722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-449722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=pause-449722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_01_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-449722
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-449722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ceaef681a1c43b0b63aace3800bed84
	  System UUID:                fe300808-6cad-40f8-b458-73bbea55969b
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dzf4t                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     66s
	  kube-system                 etcd-pause-449722                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-lbrbm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      66s
	  kube-system                 kube-apiserver-pause-449722             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-449722    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-mm5sk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-449722             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 65s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node pause-449722 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node pause-449722 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s (x8 over 79s)  kubelet          Node pause-449722 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  71s                kubelet          Node pause-449722 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s                kubelet          Node pause-449722 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s                kubelet          Node pause-449722 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           67s                node-controller  Node pause-449722 event: Registered Node pause-449722 in Controller
	  Normal   NodeReady                25s                kubelet          Node pause-449722 status is now: NodeReady
	  Normal   RegisteredNode           4s                 node-controller  Node pause-449722 event: Registered Node pause-449722 in Controller
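As a consistency check on the Allocated resources table above: CPU requests of 850m against the node's 2-CPU (2000m) capacity are 850/2000 = 42.5%, which kubectl truncates to 42%; memory requests of 220Mi (225280Ki) against 8022308Ki capacity are about 2.8%, shown truncated as 2%.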
	
	
	==> dmesg <==
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[ +41.246514] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[  +2.995481] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:37] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e] <==
	{"level":"warn","ts":"2025-10-02T22:01:04.771278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.802558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.826792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.870443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.906445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.953834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:05.103912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48576","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T22:02:01.136229Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T22:02:01.136288Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-449722","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-02T22:02:01.137076Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T22:02:01.275405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T22:02:01.275473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275494Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275546Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275673Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T22:02:01.275691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275678Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275732Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275796Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T22:02:01.275807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.279090Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-02T22:02:01.279173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.279215Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T22:02:01.279273Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-449722","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c] <==
	{"level":"warn","ts":"2025-10-02T22:02:12.153449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.176774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.190831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.210379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.226769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.238631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.262222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.273608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.289953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.308775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.325029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.342550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.388237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.390236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.408272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.427070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.447560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.470622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.483204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.497031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.520232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.554862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.563057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.593258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.689896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34580","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:02:20 up  6:44,  0 user,  load average: 3.81, 3.34, 2.55
	Linux pause-449722 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2] <==
	I1002 22:02:09.509052       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:02:09.528772       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:02:09.528918       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:02:09.528930       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:02:09.528947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:02:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1002 22:02:09.807132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:02:09.807227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:02:09.807288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:02:09.807366       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:02:09.807395       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:02:09.807404       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:02:09.807414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:02:09.807524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 22:02:14.107794       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:02:14.107835       1 metrics.go:72] Registering metrics
	I1002 22:02:14.107905       1 controller.go:711] "Syncing nftables rules"
	I1002 22:02:19.810919       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:02:19.810978       1 main.go:301] handling current node
	
	
	==> kindnet [eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda] <==
	I1002 22:01:15.308871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:01:15.309112       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:01:15.309233       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:01:15.309244       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:01:15.309257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:01:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:01:15.508498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:01:15.508583       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:01:15.508630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:01:15.508785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:01:45.509453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:01:45.509455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:01:45.509564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 22:01:45.509570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 22:01:47.109100       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:01:47.109260       1 metrics.go:72] Registering metrics
	I1002 22:01:47.109321       1 controller.go:711] "Syncing nftables rules"
	I1002 22:01:55.512602       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:01:55.512660       1 main.go:301] handling current node
	
	
	==> kube-apiserver [37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7] <==
	W1002 22:02:01.156662       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156777       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156867       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156958       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157069       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157162       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157249       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157342       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157436       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157524       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159362       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159480       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159512       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159586       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159644       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159695       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159732       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159795       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159850       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159880       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159704       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.160014       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159851       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.160094       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159965       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4] <==
	I1002 22:02:13.416434       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1002 22:02:13.416453       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1002 22:02:13.957664       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 22:02:13.957779       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:02:13.957845       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:02:13.968773       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 22:02:13.968835       1 policy_source.go:240] refreshing policies
	I1002 22:02:13.975272       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:02:13.985715       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:02:13.986556       1 aggregator.go:171] initial CRD sync complete...
	I1002 22:02:13.986616       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 22:02:13.986646       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:02:13.986675       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:02:13.988013       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:02:13.988508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:02:13.997889       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:02:14.018315       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:02:14.019222       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:02:14.044618       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 22:02:14.044832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:02:14.049224       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:02:14.051572       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:02:14.051678       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:02:14.423283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:02:14.814430       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04] <==
	I1002 22:01:13.166287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:01:13.169185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:01:13.173679       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:01:13.182220       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:01:13.190576       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:01:13.190726       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:01:13.190753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:01:13.190728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:01:13.190802       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:01:13.190814       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:01:13.191888       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:01:13.192028       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:01:13.193235       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:01:13.193235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:01:13.193255       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:01:13.193266       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 22:01:13.194151       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:01:13.193530       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:01:13.194668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 22:01:13.194679       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 22:01:13.196273       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:01:13.199951       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:01:13.207155       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:01:13.210422       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:01:58.379426       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508] <==
	I1002 22:02:16.206239       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:02:16.221797       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 22:02:16.231004       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:02:16.231102       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:02:16.231235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:02:16.232414       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:02:16.232463       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:02:16.232453       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:02:16.232613       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:02:16.232734       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-449722"
	I1002 22:02:16.232795       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 22:02:16.232523       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:02:16.232533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:02:16.233812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:02:16.232436       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 22:02:16.233898       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:02:16.233930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:02:16.233084       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 22:02:16.238090       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:02:16.240385       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:02:16.241698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:02:16.242114       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:02:16.245073       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 22:02:16.248363       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:02:16.252608       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	
	
	==> kube-proxy [5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62] <==
	I1002 22:02:12.283210       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:02:12.875929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:02:14.079165       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:02:14.079286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:02:14.079412       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:02:14.119638       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:02:14.119754       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:02:14.131191       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:02:14.131509       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:02:14.131530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:14.132691       1 config.go:200] "Starting service config controller"
	I1002 22:02:14.132783       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:02:14.140944       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:02:14.141023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:02:14.142005       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:02:14.146275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:02:14.142497       1 config.go:309] "Starting node config controller"
	I1002 22:02:14.146292       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:02:14.146298       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:02:14.233341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:02:14.246663       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:02:14.246673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc] <==
	I1002 22:01:15.280233       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:01:15.416067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:01:15.521829       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:01:15.521864       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:01:15.521939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:01:15.539255       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:01:15.539308       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:01:15.542759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:01:15.543063       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:01:15.543136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:01:15.546559       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:01:15.546582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:01:15.546856       1 config.go:200] "Starting service config controller"
	I1002 22:01:15.546880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:01:15.547298       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:01:15.554727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:01:15.554830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:01:15.547721       1 config.go:309] "Starting node config controller"
	I1002 22:01:15.554919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:01:15.554947       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:01:15.647136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:01:15.647143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e] <==
	E1002 22:01:07.517452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:01:07.518517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:01:07.518896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:01:07.518987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:01:07.519081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:01:07.519137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:01:07.519198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:01:07.519288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:01:07.519358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:01:07.519482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:01:07.519515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:01:07.519552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:01:07.519595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:01:07.519634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:01:07.519687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:01:07.519766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:01:07.519806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:01:07.519861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 22:01:08.997477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:01.123618       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 22:02:01.123651       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 22:02:01.123681       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 22:02:01.123706       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:01.123988       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 22:02:01.124011       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110] <==
	I1002 22:02:13.899815       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:13.904375       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:02:13.906877       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:02:13.906933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:13.914259       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 22:02:13.930527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:02:13.930713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:02:13.930808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:02:13.930929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:02:13.931021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:02:13.931145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:02:13.931247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:02:13.931348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:02:13.931447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:02:13.931543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:02:13.931743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:02:13.931909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:02:13.932076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:02:13.932166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:02:13.932255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:02:13.932358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:02:13.932420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:02:13.932462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:02:13.934531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 22:02:15.115783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.784945    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="a5114854e9985ad0bf13b7164e3eba60" pod="kube-system/kube-scheduler-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.806959    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-mm5sk\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="5dc0b155-d08a-4459-834e-2be1aabc4aa7" pod="kube-system/kube-proxy-mm5sk"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.831251    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-lbrbm\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="c1ae6269-d077-4adb-9511-fe7466fd8e15" pod="kube-system/kindnet-lbrbm"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.844431    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-dzf4t\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="f4e8dae8-7cc8-475e-8da2-b04e1cea5aed" pod="kube-system/coredns-66bc5c9577-dzf4t"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.868936    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="ba05bb8896f50a1cdb070df356a28cd5" pod="kube-system/etcd-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.876269    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="0744e62811fd4068ce33b544ef097fad" pod="kube-system/kube-controller-manager-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.886390    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="ba05bb8896f50a1cdb070df356a28cd5" pod="kube-system/etcd-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.915581    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="0744e62811fd4068ce33b544ef097fad" pod="kube-system/kube-controller-manager-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.934119    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="1fd5b59490a04240a195f6691e76ab60" pod="kube-system/kube-apiserver-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.941699    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="a5114854e9985ad0bf13b7164e3eba60" pod="kube-system/kube-scheduler-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.945776    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "kube-proxy-mm5sk" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="5dc0b155-d08a-4459-834e-2be1aabc4aa7" pod="kube-system/kube-proxy-mm5sk"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.948178    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "kindnet-lbrbm" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="c1ae6269-d077-4adb-9511-fe7466fd8e15" pod="kube-system/kindnet-lbrbm"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.986293    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "coredns-66bc5c9577-dzf4t" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="f4e8dae8-7cc8-475e-8da2-b04e1cea5aed" pod="kube-system/coredns-66bc5c9577-dzf4t"
	Oct 02 22:02:17 pause-449722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:02:17 pause-449722 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:02:17 pause-449722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
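For context on the repeated "Waiting for caches to sync" / "Caches are synced" and reflector "Failed to watch" lines in the kindnet, kube-proxy, and kube-scheduler logs above: these come from client-go's shared-informer machinery, whose reflector retries its initial list/watch until the apiserver is reachable. A minimal sketch of that pattern (not minikube's or kindnet's actual code; the kubeconfig location and resync period are assumptions for illustration):

	// Sketch of the client-go shared-informer pattern behind the
	// "Waiting for caches to sync" / "Caches are synced" log lines above.
	// Not part of the test output; kubeconfig location is an assumption.
	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		log.Println("Waiting for caches to sync")
		// The underlying reflector keeps retrying its list/watch (producing
		// the "connection refused" / "i/o timeout" errors seen above while
		// the apiserver restarts) until this returns true.
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			log.Fatal("caches never synced")
		}
		log.Println("Caches are synced")
	}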
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-449722 -n pause-449722
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-449722 -n pause-449722: exit status 2 (347.76718ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
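The `--format={{.APIServer}}` flag in the status command above is a Go text/template executed against minikube's status value, which is how the harness extracts a single component's state as the bare "Running" seen in the stdout block. A minimal sketch of that rendering, with an illustrative (assumed) struct shape:

	// Sketch of rendering a --format Go template against a status value.
	// The Status struct shape here is an assumption for illustration.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Paused"})
	}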
helpers_test.go:269: (dbg) Run:  kubectl --context pause-449722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
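The kubectl query above lists pods in every namespace whose phase is not Running, which is how the post-mortem checks for unhealthy pods. The same query via client-go, as a minimal sketch assuming a standard kubeconfig location:

	// Sketch of the harness's debug query above: list non-Running pods
	// cluster-wide. Kubeconfig location is an assumption; the field
	// selector matches the kubectl command exactly.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			// Analogous to the jsonpath output of the kubectl command above.
			fmt.Printf("%s/%s\n", p.Namespace, p.Name)
		}
	}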
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-449722
helpers_test.go:243: (dbg) docker inspect pause-449722:

-- stdout --
	[
	    {
	        "Id": "3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd",
	        "Created": "2025-10-02T22:00:42.346436109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1426392,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:00:42.408851396Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/hosts",
	        "LogPath": "/var/lib/docker/containers/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd/3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd-json.log",
	        "Name": "/pause-449722",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-449722:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-449722",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3851e1deb0c5763661a49802ef8efcf61c98021ed654e9b27437dea5417614cd",
	                "LowerDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a040f38fedb197ef5765eac719121cda0b16484d7b176fb92c08740f74716b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-449722",
	                "Source": "/var/lib/docker/volumes/pause-449722/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-449722",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-449722",
	                "name.minikube.sigs.k8s.io": "pause-449722",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c8f56f37f34e3493ea2eb23a4219a2d5dad53c93d1fb4fb1bbd7aa431671faa",
	            "SandboxKey": "/var/run/docker/netns/8c8f56f37f34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34526"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34527"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34528"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-449722": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:03:48:d2:c2:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93e92c2d024dd4a9175c10b5adf12a14b699e27f18e62dfb5c2bdcd9fcdd0167",
	                    "EndpointID": "47b7dcf99c5e870c4ef1af66d8b61c00e758633d2a8ae94b09a855a9019b4797",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-449722",
	                        "3851e1deb0c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
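The inspect dump above shows each cluster port published on 127.0.0.1 with an ephemeral host port (22/tcp → 34526, 8443/tcp → 34529, and so on). As a minimal sketch of how such a mapping can be read back, here is the same Go template the later cli_runner calls in this log use, wrapped in a small standalone program (illustrative code, not minikube's own):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as the "docker container inspect -f ..." calls later in
		// this log: pick the first host binding for container port 22/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"pause-449722").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // prints 34526 for the dump above
	}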
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-449722 -n pause-449722
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-449722 -n pause-449722: exit status 2 (352.410581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-449722 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-449722 logs -n 25: (1.484099428s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-732300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p missing-upgrade-385082 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-385082    │ jenkins │ v1.32.0 │ 02 Oct 25 21:56 UTC │ 02 Oct 25 21:57 UTC │
	│ start   │ -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p missing-upgrade-385082 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-385082    │ jenkins │ v1.37.0 │ 02 Oct 25 21:57 UTC │ 02 Oct 25 21:58 UTC │
	│ delete  │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ ssh     │ -p NoKubernetes-732300 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	│ stop    │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p NoKubernetes-732300 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ delete  │ -p missing-upgrade-385082                                                                                                                │ missing-upgrade-385082    │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:59 UTC │
	│ ssh     │ -p NoKubernetes-732300 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │                     │
	│ delete  │ -p NoKubernetes-732300                                                                                                                   │ NoKubernetes-732300       │ jenkins │ v1.37.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:58 UTC │
	│ start   │ -p stopped-upgrade-679793 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-679793    │ jenkins │ v1.32.0 │ 02 Oct 25 21:58 UTC │ 02 Oct 25 21:59 UTC │
	│ stop    │ -p kubernetes-upgrade-186867                                                                                                             │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-186867 │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │                     │
	│ stop    │ stopped-upgrade-679793 stop                                                                                                              │ stopped-upgrade-679793    │ jenkins │ v1.32.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p stopped-upgrade-679793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-679793    │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ delete  │ -p stopped-upgrade-679793                                                                                                                │ stopped-upgrade-679793    │ jenkins │ v1.37.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 21:59 UTC │
	│ start   │ -p running-upgrade-578747 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-578747    │ jenkins │ v1.32.0 │ 02 Oct 25 21:59 UTC │ 02 Oct 25 22:00 UTC │
	│ start   │ -p running-upgrade-578747 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-578747    │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:00 UTC │
	│ delete  │ -p running-upgrade-578747                                                                                                                │ running-upgrade-578747    │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:00 UTC │
	│ start   │ -p pause-449722 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:00 UTC │ 02 Oct 25 22:01 UTC │
	│ start   │ -p pause-449722 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:01 UTC │ 02 Oct 25 22:02 UTC │
	│ pause   │ -p pause-449722 --alsologtostderr -v=5                                                                                                   │ pause-449722              │ jenkins │ v1.37.0 │ 02 Oct 25 22:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:01:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:01:59.226431 1430316 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:01:59.226547 1430316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:59.226558 1430316 out.go:374] Setting ErrFile to fd 2...
	I1002 22:01:59.226563 1430316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:59.226812 1430316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:01:59.227172 1430316 out.go:368] Setting JSON to false
	I1002 22:01:59.228106 1430316 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24245,"bootTime":1759418275,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:01:59.228171 1430316 start.go:140] virtualization:  
	I1002 22:01:59.232574 1430316 out.go:179] * [pause-449722] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:01:59.237434 1430316 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:01:59.237509 1430316 notify.go:220] Checking for updates...
	I1002 22:01:59.243683 1430316 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:01:59.246736 1430316 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:01:59.249682 1430316 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:01:59.252488 1430316 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:01:59.255411 1430316 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:01:59.258753 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:01:59.259368 1430316 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:01:59.294599 1430316 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:01:59.294707 1430316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:59.384834 1430316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:01:59.374551717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:01:59.384974 1430316 docker.go:318] overlay module found
	I1002 22:01:59.388177 1430316 out.go:179] * Using the docker driver based on existing profile
	I1002 22:01:59.392002 1430316 start.go:304] selected driver: docker
	I1002 22:01:59.392020 1430316 start.go:924] validating driver "docker" against &{Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:01:59.392149 1430316 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:01:59.392248 1430316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:59.476838 1430316 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:01:59.466168518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:01:59.477248 1430316 cni.go:84] Creating CNI manager for ""
	I1002 22:01:59.477319 1430316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:59.477367 1430316 start.go:348] cluster config:
	{Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:01:59.483156 1430316 out.go:179] * Starting "pause-449722" primary control-plane node in "pause-449722" cluster
	I1002 22:01:59.485993 1430316 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:01:59.488898 1430316 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:01:59.491781 1430316 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:01:59.491842 1430316 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:01:59.491860 1430316 cache.go:58] Caching tarball of preloaded images
	I1002 22:01:59.491951 1430316 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:01:59.491966 1430316 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:01:59.492109 1430316 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/config.json ...
	I1002 22:01:59.492341 1430316 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:01:59.516548 1430316 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:01:59.516575 1430316 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:01:59.516596 1430316 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:01:59.516618 1430316 start.go:360] acquireMachinesLock for pause-449722: {Name:mk9f4e85f1e6af4159d662778a2e02f9e2b774c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:01:59.516681 1430316 start.go:364] duration metric: took 37.185µs to acquireMachinesLock for "pause-449722"
	I1002 22:01:59.516706 1430316 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:01:59.516718 1430316 fix.go:54] fixHost starting: 
	I1002 22:01:59.517020 1430316 cli_runner.go:164] Run: docker container inspect pause-449722 --format={{.State.Status}}
	I1002 22:01:59.535811 1430316 fix.go:112] recreateIfNeeded on pause-449722: state=Running err=<nil>
	W1002 22:01:59.535852 1430316 fix.go:138] unexpected machine state, will restart: <nil>
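fixHost decided to reuse the machine here: the docker container inspect pause-449722 --format={{.State.Status}} probe just above returned a running state, so no recreate actually happens despite the "will restart" warning. A standalone sketch of that probe (assumes docker on PATH; illustrative code, not minikube's own):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Probe the machine container's state; "running" means the existing
		// machine can be reused and minikube goes straight to provisioning.
		out, err := exec.Command("docker", "container", "inspect",
			"pause-449722", "--format", "{{.State.Status}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("state:", strings.TrimSpace(string(out)))
	}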
	I1002 22:01:55.835544 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:01:55.835971 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:01:55.836022 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:55.836079 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:55.876766 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:55.876786 1418276 cri.go:89] found id: ""
	I1002 22:01:55.876794 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:01:55.876851 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:55.883511 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:55.883582 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:55.936608 1418276 cri.go:89] found id: ""
	I1002 22:01:55.936636 1418276 logs.go:282] 0 containers: []
	W1002 22:01:55.936644 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:01:55.936651 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:55.936710 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:55.974833 1418276 cri.go:89] found id: ""
	I1002 22:01:55.974857 1418276 logs.go:282] 0 containers: []
	W1002 22:01:55.974872 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:01:55.974879 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:55.974931 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:56.013594 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:56.013620 1418276 cri.go:89] found id: ""
	I1002 22:01:56.013629 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:01:56.013690 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:56.017977 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:56.018090 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:56.048834 1418276 cri.go:89] found id: ""
	I1002 22:01:56.048860 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.048874 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:01:56.048882 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:56.048941 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:56.078998 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:56.079024 1418276 cri.go:89] found id: ""
	I1002 22:01:56.079034 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:01:56.079124 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:56.083182 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:56.083262 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:56.111329 1418276 cri.go:89] found id: ""
	I1002 22:01:56.111370 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.111380 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:01:56.111387 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:01:56.111460 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:01:56.139377 1418276 cri.go:89] found id: ""
	I1002 22:01:56.139400 1418276 logs.go:282] 0 containers: []
	W1002 22:01:56.139409 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:01:56.139418 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:56.139435 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:56.256371 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:01:56.256413 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:01:56.279639 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:56.279673 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:01:56.390670 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:01:56.390695 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:01:56.390734 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:56.426347 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:01:56.426380 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:56.482382 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:01:56.482418 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:56.514457 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:56.514484 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:01:56.576640 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:01:56.576735 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
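The interleaved 1418276 lines above are one full iteration of a readiness loop: probe the apiserver's /healthz, treat a refused connection as "stopped", gather component logs over SSH, then retry. A minimal sketch of that poll pattern, with the endpoint taken from the log and TLS verification skipped as a bootstrap probe would (illustrative only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 0; attempt < 5; attempt++ {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err != nil {
				// Matches the "stopped: ... connection refused" lines above.
				fmt.Println("stopped:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			return
		}
	}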
	I1002 22:01:59.108176 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:01:59.108556 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:01:59.108595 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:59.108648 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:59.180518 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:59.180538 1418276 cri.go:89] found id: ""
	I1002 22:01:59.180546 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:01:59.180615 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.185494 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:59.185567 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:59.230782 1418276 cri.go:89] found id: ""
	I1002 22:01:59.230888 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.230896 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:01:59.230903 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:59.230955 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:59.262637 1418276 cri.go:89] found id: ""
	I1002 22:01:59.262720 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.262732 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:01:59.262740 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:59.262796 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:59.311788 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:59.311807 1418276 cri.go:89] found id: ""
	I1002 22:01:59.311816 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:01:59.311869 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.318663 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:59.318739 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:59.380315 1418276 cri.go:89] found id: ""
	I1002 22:01:59.380337 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.380344 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:01:59.380351 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:59.380406 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:59.428720 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:59.428743 1418276 cri.go:89] found id: ""
	I1002 22:01:59.428770 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:01:59.428845 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:01:59.433082 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:59.433163 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:59.471675 1418276 cri.go:89] found id: ""
	I1002 22:01:59.471702 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.471711 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:01:59.471718 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:01:59.471776 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:01:59.512588 1418276 cri.go:89] found id: ""
	I1002 22:01:59.512613 1418276 logs.go:282] 0 containers: []
	W1002 22:01:59.512621 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:01:59.512630 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:01:59.512641 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:01:59.556869 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:59.556898 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:59.696362 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:01:59.696441 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:01:59.717125 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:59.717152 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:01:59.539062 1430316 out.go:252] * Updating the running docker "pause-449722" container ...
	I1002 22:01:59.539144 1430316 machine.go:93] provisionDockerMachine start ...
	I1002 22:01:59.539251 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.566621 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.566958 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.566974 1430316 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:01:59.710201 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-449722
	
	I1002 22:01:59.710234 1430316 ubuntu.go:182] provisioning hostname "pause-449722"
	I1002 22:01:59.710306 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.737651 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.737952 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.737963 1430316 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-449722 && echo "pause-449722" | sudo tee /etc/hostname
	I1002 22:01:59.912204 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-449722
	
	I1002 22:01:59.912304 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:01:59.955802 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:59.956152 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:01:59.956175 1430316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-449722' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-449722/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-449722' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:02:00.314857 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
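The guarded script above edits /etc/hosts only when the hostname is not already mapped, rewriting the 127.0.1.1 line in place or appending one. Here is the same guard logic re-implemented locally in Go for illustration (ensureHostname is a hypothetical helper, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// ensureHostname mirrors the shell guard above: if no line already ends in
	// the machine name, rewrite (or append) the 127.0.1.1 entry.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // hostname already resolves locally
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "pause-449722"))
	}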
	I1002 22:02:00.314896 1430316 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:02:00.314928 1430316 ubuntu.go:190] setting up certificates
	I1002 22:02:00.314941 1430316 provision.go:84] configureAuth start
	I1002 22:02:00.315018 1430316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-449722
	I1002 22:02:00.382302 1430316 provision.go:143] copyHostCerts
	I1002 22:02:00.382386 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:02:00.382415 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:02:00.382510 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:02:00.382636 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:02:00.382647 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:02:00.382677 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:02:00.382744 1430316 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:02:00.382754 1430316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:02:00.382782 1430316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:02:00.382843 1430316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.pause-449722 san=[127.0.0.1 192.168.76.2 localhost minikube pause-449722]
	I1002 22:02:00.772862 1430316 provision.go:177] copyRemoteCerts
	I1002 22:02:00.772943 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:02:00.772988 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:00.791689 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:00.890351 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:02:00.909863 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 22:02:00.928872 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:02:00.947566 1430316 provision.go:87] duration metric: took 632.602548ms to configureAuth
	I1002 22:02:00.947590 1430316 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:02:00.947817 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:00.947931 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:00.966778 1430316 main.go:141] libmachine: Using SSH client type: native
	I1002 22:02:00.967145 1430316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34526 <nil> <nil>}
	I1002 22:02:00.967162 1430316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1002 22:01:59.806679 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:01:59.806700 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:01:59.806713 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:01:59.863553 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:01:59.863630 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:01:59.942340 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:01:59.942381 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:01:59.978814 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:59.978901 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:02.750115 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:02.750584 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:02.750634 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:02.750697 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:02.777205 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:02.777228 1418276 cri.go:89] found id: ""
	I1002 22:02:02.777236 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:02.777293 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.781040 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:02.781211 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:02.807185 1418276 cri.go:89] found id: ""
	I1002 22:02:02.807208 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.807217 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:02.807223 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:02.807288 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:02.837424 1418276 cri.go:89] found id: ""
	I1002 22:02:02.837447 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.837456 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:02.837462 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:02.837520 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:02.864305 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:02.864327 1418276 cri.go:89] found id: ""
	I1002 22:02:02.864335 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:02.864391 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.868201 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:02.868275 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:02.893478 1418276 cri.go:89] found id: ""
	I1002 22:02:02.893503 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.893511 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:02.893518 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:02.893578 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:02.921217 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:02.921240 1418276 cri.go:89] found id: ""
	I1002 22:02:02.921249 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:02.921305 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:02.925077 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:02.925156 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:02.951022 1418276 cri.go:89] found id: ""
	I1002 22:02:02.951048 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.951057 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:02.951063 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:02.951123 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:02.977418 1418276 cri.go:89] found id: ""
	I1002 22:02:02.977442 1418276 logs.go:282] 0 containers: []
	W1002 22:02:02.977451 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:02.977459 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:02.977471 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:03.037540 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:03.037577 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:03.066331 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:03.066361 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:03.125544 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:03.125580 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:03.155950 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:03.155976 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:03.272687 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:03.272725 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:03.289120 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:03.289146 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:03.372734 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:03.372753 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:03.372766 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:06.348231 1430316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:02:06.348252 1430316 machine.go:96] duration metric: took 6.80909796s to provisionDockerMachine
	I1002 22:02:06.348264 1430316 start.go:293] postStartSetup for "pause-449722" (driver="docker")
	I1002 22:02:06.348274 1430316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:02:06.348342 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:02:06.348399 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.382251 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.491607 1430316 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:02:06.495944 1430316 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:02:06.495978 1430316 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:02:06.495989 1430316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:02:06.496043 1430316 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:02:06.496122 1430316 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:02:06.496220 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:02:06.505977 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:06.527636 1430316 start.go:296] duration metric: took 179.356563ms for postStartSetup
	I1002 22:02:06.527739 1430316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:02:06.527788 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.556678 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.665234 1430316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:02:06.675769 1430316 fix.go:56] duration metric: took 7.159051636s for fixHost
	I1002 22:02:06.675795 1430316 start.go:83] releasing machines lock for "pause-449722", held for 7.159099791s
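The two df probes just above size up the filesystem backing /var: the first reads the usage percentage, the second the free space in whole gigabytes. Copied out of the log, with hypothetical variable names added for illustration:

  # Row 2 of df output is the filesystem backing /var;
  # with -h column 5 is Use%, with -BG column 4 is Avail in GiB.
  used_pct=$(df -h /var | awk 'NR==2{print $5}')
  avail_gb=$(df -BG /var | awk 'NR==2{print $4}')
  echo "/var: $used_pct used, $avail_gb free"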
	I1002 22:02:06.675888 1430316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-449722
	I1002 22:02:06.701403 1430316 ssh_runner.go:195] Run: cat /version.json
	I1002 22:02:06.701423 1430316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:02:06.701456 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.701476 1430316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-449722
	I1002 22:02:06.726190 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.738221 1430316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34526 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/pause-449722/id_rsa Username:docker}
	I1002 22:02:06.911418 1430316 ssh_runner.go:195] Run: systemctl --version
	I1002 22:02:06.918381 1430316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:02:06.960026 1430316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:02:06.964874 1430316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:02:06.964955 1430316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:02:06.973937 1430316 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
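Before handing pod networking to kindnet, minikube sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, leaving loopback configs alone; here nothing matched, so nothing was disabled. An equivalent, quoting-safe sketch of the find invocation from the log:

  # Rename bridge/podman CNI configs out of the way, skipping any that are
  # already disabled; print each path as it is moved.
  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;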
	I1002 22:02:06.973958 1430316 start.go:495] detecting cgroup driver to use...
	I1002 22:02:06.973990 1430316 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:02:06.974079 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:02:06.989483 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:02:07.007299 1430316 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:02:07.007369 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:02:07.023264 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:02:07.036753 1430316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:02:07.175570 1430316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:02:07.317988 1430316 docker.go:234] disabling docker service ...
	I1002 22:02:07.318073 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:02:07.333393 1430316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:02:07.346952 1430316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:02:07.485850 1430316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:02:07.623865 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
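The stop/disable/mask sequence above leaves CRI-O as the only runtime that can own the CRI socket: containerd is stopped, then cri-dockerd and Docker are stopped, disabled, and masked so systemd cannot resurrect them. Condensed into a sketch (unit names and order taken from the log; error handling omitted):

  # Stop competing runtimes, then disable and mask them so they stay down.
  sudo systemctl stop -f containerd
  sudo systemctl stop -f cri-docker.socket cri-docker.service
  sudo systemctl disable cri-docker.socket
  sudo systemctl mask cri-docker.service
  sudo systemctl stop -f docker.socket docker.service
  sudo systemctl disable docker.socket
  sudo systemctl mask docker.service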
	I1002 22:02:07.637098 1430316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:02:07.652668 1430316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:02:07.652787 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.662985 1430316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:02:07.663084 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.672523 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.682196 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.691331 1430316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:02:07.700007 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.708985 1430316 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.717490 1430316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:02:07.726622 1430316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:02:07.734138 1430316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:02:07.741673 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:07.879021 1430316 ssh_runner.go:195] Run: sudo systemctl restart crio
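The run of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, recreate conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls so pods may bind low ports; then IPv4 forwarding is enabled and CRI-O restarted. The same steps collapsed into one sketch (paths and values from the log; this is a condensed reading, not minikube's code):

  conf=/etc/crio/crio.conf.d/02-crio.conf
  # Point crictl at CRI-O's socket:
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml >/dev/null
  # Pin the pause image and the cgroup driver:
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
  sudo sed -i '/conmon_cgroup = .*/d' "$conf"
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
  # Ensure a default_sysctls list exists, then allow unprivileged low ports:
  sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
  sudo grep -q '^ *default_sysctls' "$conf" || \
    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
  sudo systemctl daemon-reload && sudo systemctl restart crio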
	I1002 22:02:08.062147 1430316 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:02:08.062309 1430316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:02:08.067240 1430316 start.go:563] Will wait 60s for crictl version
	I1002 22:02:08.067355 1430316 ssh_runner.go:195] Run: which crictl
	I1002 22:02:08.071763 1430316 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:02:08.105895 1430316 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:02:08.106123 1430316 ssh_runner.go:195] Run: crio --version
	I1002 22:02:08.139919 1430316 ssh_runner.go:195] Run: crio --version
	I1002 22:02:08.172406 1430316 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:02:08.175444 1430316 cli_runner.go:164] Run: docker network inspect pause-449722 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:02:08.192901 1430316 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:02:08.197186 1430316 kubeadm.go:883] updating cluster {Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:02:08.197331 1430316 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:02:08.197385 1430316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:08.231153 1430316 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:08.231177 1430316 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:02:08.231232 1430316 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:02:08.257729 1430316 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:02:08.257754 1430316 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:02:08.257763 1430316 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:02:08.257876 1430316 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-449722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
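That unit fragment is installed as a systemd drop-in (the 362-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), with an empty ExecStart= first so the override replaces, rather than appends to, the packaged command line. A hand-rolled sketch of the same installation, contents copied from the log:

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
  [Unit]
  Wants=crio.service

  [Service]
  ExecStart=
  ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-449722 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

  [Install]
  EOF
  sudo systemctl daemon-reload && sudo systemctl start kubelet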
	I1002 22:02:08.257963 1430316 ssh_runner.go:195] Run: crio config
	I1002 22:02:08.331872 1430316 cni.go:84] Creating CNI manager for ""
	I1002 22:02:08.331895 1430316 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:02:08.331915 1430316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:02:08.331938 1430316 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-449722 NodeName:pause-449722 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:02:08.332071 1430316 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-449722"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:02:08.332151 1430316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:02:08.340230 1430316 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:02:08.340300 1430316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:02:08.347935 1430316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 22:02:08.362875 1430316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:02:08.375761 1430316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1002 22:02:08.388462 1430316 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:02:08.392195 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:08.535797 1430316 ssh_runner.go:195] Run: sudo systemctl start kubelet
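The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new rather than applied outright: on a restart, minikube diffs it against the copy left by the previous run and skips control-plane reconfiguration when the two match, which is exactly what happens at 22:02:09.795 below. That decision as a sketch, assuming both files exist:

  # Reuse the running control plane when the rendered config is unchanged.
  if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
    echo "running cluster does not require reconfiguration"
  else
    echo "kubeadm config changed; control plane must be reconfigured" >&2
  fi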
	I1002 22:02:08.552602 1430316 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722 for IP: 192.168.76.2
	I1002 22:02:08.552625 1430316 certs.go:195] generating shared ca certs ...
	I1002 22:02:08.552641 1430316 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:08.552808 1430316 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:02:08.552856 1430316 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:02:08.552867 1430316 certs.go:257] generating profile certs ...
	I1002 22:02:08.552956 1430316 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key
	I1002 22:02:08.553029 1430316 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.key.c04b0f76
	I1002 22:02:08.553066 1430316 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.key
	I1002 22:02:08.553204 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:02:08.553245 1430316 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:02:08.553258 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:02:08.553282 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:02:08.553312 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:02:08.553349 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:02:08.553415 1430316 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:02:08.554126 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:02:08.574203 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:02:08.592080 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:02:08.610050 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:02:08.628216 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 22:02:08.648113 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:02:08.666903 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:02:08.686358 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:02:08.710673 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:02:08.728358 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:02:08.746391 1430316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:02:08.765205 1430316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:02:08.778018 1430316 ssh_runner.go:195] Run: openssl version
	I1002 22:02:08.784899 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:02:08.793588 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.797444 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.797538 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:02:08.838801 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:02:08.847586 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:02:08.860703 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.864818 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.864887 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:02:08.906222 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:02:08.914282 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:02:08.924325 1430316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.928830 1430316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.928909 1430316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:02:08.970240 1430316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
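The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked into /etc/ssl/certs as <hash>.0 so TLS clients can look it up by subject hash (b5213941.0 is minikubeCA's hash; 3ec20f2e.0 and 51391683.0 belong to the test certs). For a single certificate the sketch is:

  pem=/usr/share/ca-certificates/minikubeCA.pem
  # The subject hash determines the cert's lookup name in /etc/ssl/certs.
  hash=$(openssl x509 -hash -noout -in "$pem")
  sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"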
	I1002 22:02:08.981781 1430316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:02:08.985652 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:02:09.028202 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:02:09.070223 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:02:09.116783 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:02:09.174885 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
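Each control-plane certificate is then vetted with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The same checks as one loop (cert names from the log; the matching front-proxy-client check appears a few lines further down, interleaved with the other test's output):

  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
           etcd/healthcheck-client etcd/peer front-proxy-client; do
    sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
      || echo "certificate $c expires within 24h" >&2
  done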
	I1002 22:02:05.914562 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:05.915036 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:05.915097 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:05.915168 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:05.941634 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:05.941656 1418276 cri.go:89] found id: ""
	I1002 22:02:05.941664 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:05.941721 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:05.945502 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:05.945611 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:05.971759 1418276 cri.go:89] found id: ""
	I1002 22:02:05.971783 1418276 logs.go:282] 0 containers: []
	W1002 22:02:05.971791 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:05.971798 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:05.971856 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:05.997166 1418276 cri.go:89] found id: ""
	I1002 22:02:05.997190 1418276 logs.go:282] 0 containers: []
	W1002 22:02:05.997199 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:05.997206 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:05.997263 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:06.028332 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:06.028359 1418276 cri.go:89] found id: ""
	I1002 22:02:06.028368 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:06.028437 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:06.032547 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:06.032623 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:06.059845 1418276 cri.go:89] found id: ""
	I1002 22:02:06.059869 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.059878 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:06.059895 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:06.059971 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:06.087587 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:06.087614 1418276 cri.go:89] found id: ""
	I1002 22:02:06.087623 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:06.087700 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:06.091674 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:06.091752 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:06.121132 1418276 cri.go:89] found id: ""
	I1002 22:02:06.121155 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.121170 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:06.121181 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:06.121240 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:06.161679 1418276 cri.go:89] found id: ""
	I1002 22:02:06.161706 1418276 logs.go:282] 0 containers: []
	W1002 22:02:06.161721 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:06.161732 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:06.161743 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:06.248534 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:06.248551 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:06.248564 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:06.285181 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:06.285213 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:06.363486 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:06.363580 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:06.412747 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:06.412771 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:06.483461 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:06.483497 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:06.551884 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:06.551913 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:06.675568 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:06.675601 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:09.198130 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:09.198546 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:09.198586 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:09.198657 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:09.254887 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:09.254912 1418276 cri.go:89] found id: ""
	I1002 22:02:09.254925 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:09.254986 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.260155 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:09.260240 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:09.316079 1418276 cri.go:89] found id: ""
	I1002 22:02:09.316101 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.316110 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:09.316116 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:09.316190 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:09.372277 1418276 cri.go:89] found id: ""
	I1002 22:02:09.372300 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.372308 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:09.372315 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:09.372381 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:09.430195 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:09.430228 1418276 cri.go:89] found id: ""
	I1002 22:02:09.430238 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:09.430316 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.437220 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:09.437305 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:09.484260 1418276 cri.go:89] found id: ""
	I1002 22:02:09.484379 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.484403 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:09.484447 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:09.484572 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:09.525085 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:09.525181 1418276 cri.go:89] found id: ""
	I1002 22:02:09.525225 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:09.525352 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:09.534859 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:09.535051 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:09.612578 1418276 cri.go:89] found id: ""
	I1002 22:02:09.612676 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.612700 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:09.612737 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:09.612824 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:09.664653 1418276 cri.go:89] found id: ""
	I1002 22:02:09.664742 1418276 logs.go:282] 0 containers: []
	W1002 22:02:09.664764 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:09.664802 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:09.664838 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:09.732665 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:09.732760 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:09.289619 1430316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 22:02:09.466638 1430316 kubeadm.go:400] StartCluster: {Name:pause-449722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-449722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:02:09.466757 1430316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:02:09.466833 1430316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:02:09.672361 1430316 cri.go:89] found id: "713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508"
	I1002 22:02:09.672383 1430316 cri.go:89] found id: "d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110"
	I1002 22:02:09.672388 1430316 cri.go:89] found id: "a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5"
	I1002 22:02:09.672392 1430316 cri.go:89] found id: "03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2"
	I1002 22:02:09.672395 1430316 cri.go:89] found id: "7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37"
	I1002 22:02:09.672399 1430316 cri.go:89] found id: "eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda"
	I1002 22:02:09.672412 1430316 cri.go:89] found id: "62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc"
	I1002 22:02:09.672415 1430316 cri.go:89] found id: "438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e"
	I1002 22:02:09.672418 1430316 cri.go:89] found id: "929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e"
	I1002 22:02:09.672426 1430316 cri.go:89] found id: "4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04"
	I1002 22:02:09.672430 1430316 cri.go:89] found id: "37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7"
	I1002 22:02:09.672433 1430316 cri.go:89] found id: ""
	I1002 22:02:09.672487 1430316 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:02:09.719346 1430316 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:02:09Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:02:09.719438 1430316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:02:09.744153 1430316 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:02:09.744181 1430316 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:02:09.744239 1430316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:02:09.765666 1430316 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:02:09.766375 1430316 kubeconfig.go:125] found "pause-449722" server: "https://192.168.76.2:8443"
	I1002 22:02:09.767211 1430316 kapi.go:59] client config for pause-449722: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key", CAFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:02:09.767695 1430316 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 22:02:09.767722 1430316 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 22:02:09.767729 1430316 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 22:02:09.767734 1430316 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 22:02:09.767739 1430316 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 22:02:09.768063 1430316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:02:09.795277 1430316 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:02:09.795308 1430316 kubeadm.go:601] duration metric: took 51.122221ms to restartPrimaryControlPlane
	I1002 22:02:09.795318 1430316 kubeadm.go:402] duration metric: took 328.690318ms to StartCluster
	I1002 22:02:09.795333 1430316 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:09.795404 1430316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:02:09.796244 1430316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:02:09.796461 1430316 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:02:09.796831 1430316 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:02:09.796979 1430316 config.go:182] Loaded profile config "pause-449722": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:02:09.800001 1430316 out.go:179] * Enabled addons: 
	I1002 22:02:09.800102 1430316 out.go:179] * Verifying Kubernetes components...
	I1002 22:02:09.803152 1430316 addons.go:514] duration metric: took 6.301667ms for enable addons: enabled=[]
	I1002 22:02:09.803311 1430316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:02:10.213213 1430316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:02:10.238330 1430316 node_ready.go:35] waiting up to 6m0s for node "pause-449722" to be "Ready" ...
	I1002 22:02:13.802256 1430316 node_ready.go:49] node "pause-449722" is "Ready"
	I1002 22:02:13.802283 1430316 node_ready.go:38] duration metric: took 3.563906225s for node "pause-449722" to be "Ready" ...
	I1002 22:02:13.802297 1430316 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:02:13.802360 1430316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:02:13.823442 1430316 api_server.go:72] duration metric: took 4.026943583s to wait for apiserver process to appear ...
	I1002 22:02:13.823463 1430316 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:02:13.823483 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:13.838372 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 22:02:13.838395 1430316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 22:02:09.845253 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:09.845296 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:09.893790 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:09.893819 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:09.993106 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:09.993144 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:10.070702 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:10.070735 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:10.218144 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:10.218188 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:10.264151 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:10.264188 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:10.402616 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:12.903243 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:12.903638 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:12.903687 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:12.903749 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:12.933563 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:12.933586 1418276 cri.go:89] found id: ""
	I1002 22:02:12.933595 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:12.933655 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:12.938071 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:12.938154 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:12.967157 1418276 cri.go:89] found id: ""
	I1002 22:02:12.967182 1418276 logs.go:282] 0 containers: []
	W1002 22:02:12.967191 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:12.967198 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:12.967259 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:13.007858 1418276 cri.go:89] found id: ""
	I1002 22:02:13.007886 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.007895 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:13.007902 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:13.007966 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:13.057253 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:13.057277 1418276 cri.go:89] found id: ""
	I1002 22:02:13.057285 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:13.057350 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:13.061385 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:13.061463 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:13.095021 1418276 cri.go:89] found id: ""
	I1002 22:02:13.095046 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.095055 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:13.095062 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:13.095125 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:13.135966 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:13.135990 1418276 cri.go:89] found id: ""
	I1002 22:02:13.135998 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:13.136058 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:13.142848 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:13.142930 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:13.190720 1418276 cri.go:89] found id: ""
	I1002 22:02:13.190746 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.190755 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:13.190762 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:13.190824 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:13.224778 1418276 cri.go:89] found id: ""
	I1002 22:02:13.224803 1418276 logs.go:282] 0 containers: []
	W1002 22:02:13.224811 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:13.224821 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:13.224833 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:13.274105 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:13.274144 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:13.365879 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:13.365915 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:13.419829 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:13.419859 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:13.508518 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:13.508564 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:13.585473 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:13.585504 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:13.732735 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:13.732772 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:13.767439 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:13.767468 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:13.898117 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:14.323858 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:14.333395 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:02:14.333429 1430316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:02:14.823589 1430316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:14.833077 1430316 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:02:14.835136 1430316 api_server.go:141] control plane version: v1.34.1
	I1002 22:02:14.835209 1430316 api_server.go:131] duration metric: took 1.011738692s to wait for apiserver health ...
	I1002 22:02:14.835233 1430316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:02:14.838758 1430316 system_pods.go:59] 7 kube-system pods found
	I1002 22:02:14.838789 1430316 system_pods.go:61] "coredns-66bc5c9577-dzf4t" [f4e8dae8-7cc8-475e-8da2-b04e1cea5aed] Running
	I1002 22:02:14.838796 1430316 system_pods.go:61] "etcd-pause-449722" [8b49eca8-0b72-4049-bb6a-a4783e041caa] Running
	I1002 22:02:14.838802 1430316 system_pods.go:61] "kindnet-lbrbm" [c1ae6269-d077-4adb-9511-fe7466fd8e15] Running
	I1002 22:02:14.838806 1430316 system_pods.go:61] "kube-apiserver-pause-449722" [531bbc4e-f650-46c4-8fe3-699cc5a02c5b] Running
	I1002 22:02:14.838810 1430316 system_pods.go:61] "kube-controller-manager-pause-449722" [034e4449-3f02-4c78-9b0f-c2e83a75a180] Running
	I1002 22:02:14.838815 1430316 system_pods.go:61] "kube-proxy-mm5sk" [5dc0b155-d08a-4459-834e-2be1aabc4aa7] Running
	I1002 22:02:14.838821 1430316 system_pods.go:61] "kube-scheduler-pause-449722" [69112277-9eb1-4849-a2dd-d174886284fa] Running
	I1002 22:02:14.838826 1430316 system_pods.go:74] duration metric: took 3.576541ms to wait for pod list to return data ...
	I1002 22:02:14.838833 1430316 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:02:14.845098 1430316 default_sa.go:45] found service account: "default"
	I1002 22:02:14.845121 1430316 default_sa.go:55] duration metric: took 6.28168ms for default service account to be created ...
	I1002 22:02:14.845131 1430316 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:02:14.849498 1430316 system_pods.go:86] 7 kube-system pods found
	I1002 22:02:14.849573 1430316 system_pods.go:89] "coredns-66bc5c9577-dzf4t" [f4e8dae8-7cc8-475e-8da2-b04e1cea5aed] Running
	I1002 22:02:14.849596 1430316 system_pods.go:89] "etcd-pause-449722" [8b49eca8-0b72-4049-bb6a-a4783e041caa] Running
	I1002 22:02:14.849624 1430316 system_pods.go:89] "kindnet-lbrbm" [c1ae6269-d077-4adb-9511-fe7466fd8e15] Running
	I1002 22:02:14.849653 1430316 system_pods.go:89] "kube-apiserver-pause-449722" [531bbc4e-f650-46c4-8fe3-699cc5a02c5b] Running
	I1002 22:02:14.849671 1430316 system_pods.go:89] "kube-controller-manager-pause-449722" [034e4449-3f02-4c78-9b0f-c2e83a75a180] Running
	I1002 22:02:14.849691 1430316 system_pods.go:89] "kube-proxy-mm5sk" [5dc0b155-d08a-4459-834e-2be1aabc4aa7] Running
	I1002 22:02:14.849710 1430316 system_pods.go:89] "kube-scheduler-pause-449722" [69112277-9eb1-4849-a2dd-d174886284fa] Running
	I1002 22:02:14.849740 1430316 system_pods.go:126] duration metric: took 4.602204ms to wait for k8s-apps to be running ...
	I1002 22:02:14.849762 1430316 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:02:14.849860 1430316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:02:14.864274 1430316 system_svc.go:56] duration metric: took 14.502056ms WaitForService to wait for kubelet
	I1002 22:02:14.864348 1430316 kubeadm.go:586] duration metric: took 5.067854186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:02:14.864384 1430316 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:02:14.873882 1430316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:02:14.873910 1430316 node_conditions.go:123] node cpu capacity is 2
	I1002 22:02:14.873923 1430316 node_conditions.go:105] duration metric: took 9.519872ms to run NodePressure ...
	I1002 22:02:14.873955 1430316 start.go:241] waiting for startup goroutines ...
	I1002 22:02:14.873970 1430316 start.go:246] waiting for cluster config update ...
	I1002 22:02:14.873980 1430316 start.go:255] writing updated cluster config ...
	I1002 22:02:14.874367 1430316 ssh_runner.go:195] Run: rm -f paused
	I1002 22:02:14.879309 1430316 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:02:14.879978 1430316 kapi.go:59] client config for pause-449722: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/pause-449722/client.key", CAFile:"/home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:02:14.883735 1430316 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dzf4t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.895815 1430316 pod_ready.go:94] pod "coredns-66bc5c9577-dzf4t" is "Ready"
	I1002 22:02:14.895848 1430316 pod_ready.go:86] duration metric: took 12.08569ms for pod "coredns-66bc5c9577-dzf4t" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.898504 1430316 pod_ready.go:83] waiting for pod "etcd-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.902671 1430316 pod_ready.go:94] pod "etcd-pause-449722" is "Ready"
	I1002 22:02:14.902698 1430316 pod_ready.go:86] duration metric: took 4.169604ms for pod "etcd-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.904737 1430316 pod_ready.go:83] waiting for pod "kube-apiserver-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.910722 1430316 pod_ready.go:94] pod "kube-apiserver-pause-449722" is "Ready"
	I1002 22:02:14.910754 1430316 pod_ready.go:86] duration metric: took 5.989944ms for pod "kube-apiserver-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:14.912894 1430316 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.282726 1430316 pod_ready.go:94] pod "kube-controller-manager-pause-449722" is "Ready"
	I1002 22:02:15.282756 1430316 pod_ready.go:86] duration metric: took 369.837004ms for pod "kube-controller-manager-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.484381 1430316 pod_ready.go:83] waiting for pod "kube-proxy-mm5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:15.883391 1430316 pod_ready.go:94] pod "kube-proxy-mm5sk" is "Ready"
	I1002 22:02:15.883424 1430316 pod_ready.go:86] duration metric: took 399.016661ms for pod "kube-proxy-mm5sk" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.083213 1430316 pod_ready.go:83] waiting for pod "kube-scheduler-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.483510 1430316 pod_ready.go:94] pod "kube-scheduler-pause-449722" is "Ready"
	I1002 22:02:16.483584 1430316 pod_ready.go:86] duration metric: took 400.34599ms for pod "kube-scheduler-pause-449722" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:02:16.483610 1430316 pod_ready.go:40] duration metric: took 1.604254897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:02:16.563674 1430316 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:02:16.568922 1430316 out.go:179] * Done! kubectl is now configured to use "pause-449722" cluster and "default" namespace by default
	I1002 22:02:16.398343 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:16.398869 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:16.398913 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:16.398967 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:16.434116 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:16.434138 1418276 cri.go:89] found id: ""
	I1002 22:02:16.434146 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:16.434201 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:16.437907 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:16.437990 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:16.464298 1418276 cri.go:89] found id: ""
	I1002 22:02:16.464322 1418276 logs.go:282] 0 containers: []
	W1002 22:02:16.464330 1418276 logs.go:284] No container was found matching "etcd"
	I1002 22:02:16.464337 1418276 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:16.464395 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:16.496950 1418276 cri.go:89] found id: ""
	I1002 22:02:16.497040 1418276 logs.go:282] 0 containers: []
	W1002 22:02:16.497057 1418276 logs.go:284] No container was found matching "coredns"
	I1002 22:02:16.497065 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:16.497128 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:16.541329 1418276 cri.go:89] found id: "624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:16.541367 1418276 cri.go:89] found id: ""
	I1002 22:02:16.541377 1418276 logs.go:282] 1 containers: [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f]
	I1002 22:02:16.541435 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:16.546398 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:16.546492 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:16.602624 1418276 cri.go:89] found id: ""
	I1002 22:02:16.602648 1418276 logs.go:282] 0 containers: []
	W1002 22:02:16.602657 1418276 logs.go:284] No container was found matching "kube-proxy"
	I1002 22:02:16.602664 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:16.602722 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:16.646650 1418276 cri.go:89] found id: "f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:16.646731 1418276 cri.go:89] found id: ""
	I1002 22:02:16.646743 1418276 logs.go:282] 1 containers: [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205]
	I1002 22:02:16.646802 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:16.650910 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:16.651055 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:16.684715 1418276 cri.go:89] found id: ""
	I1002 22:02:16.684737 1418276 logs.go:282] 0 containers: []
	W1002 22:02:16.684807 1418276 logs.go:284] No container was found matching "kindnet"
	I1002 22:02:16.684814 1418276 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:16.684873 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:16.730621 1418276 cri.go:89] found id: ""
	I1002 22:02:16.730666 1418276 logs.go:282] 0 containers: []
	W1002 22:02:16.730675 1418276 logs.go:284] No container was found matching "storage-provisioner"
	I1002 22:02:16.730684 1418276 logs.go:123] Gathering logs for kube-scheduler [624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f] ...
	I1002 22:02:16.730696 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 624a243523adae407d24f0eab12efabcda3bef925fcbfaf5d54a735089c7d74f"
	I1002 22:02:16.802471 1418276 logs.go:123] Gathering logs for kube-controller-manager [f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205] ...
	I1002 22:02:16.802574 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f001b0d9eacbfa7c370cd2e35014216ec991a9848f287924dc27c98a2a8ad205"
	I1002 22:02:16.837661 1418276 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:16.837686 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:16.916622 1418276 logs.go:123] Gathering logs for container status ...
	I1002 22:02:16.916715 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:16.950564 1418276 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:16.950591 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:17.080346 1418276 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:17.080425 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:17.107332 1418276 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:17.107360 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:17.204016 1418276 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:17.204080 1418276 logs.go:123] Gathering logs for kube-apiserver [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea] ...
	I1002 22:02:17.204107 1418276 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:19.742122 1418276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:02:19.742531 1418276 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1002 22:02:19.742580 1418276 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:19.742637 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:19.774098 1418276 cri.go:89] found id: "627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea"
	I1002 22:02:19.774118 1418276 cri.go:89] found id: ""
	I1002 22:02:19.774126 1418276 logs.go:282] 1 containers: [627c076c2d4462967529fea3f46ccc99d65e56bdcd2606803a6a036bcf29d6ea]
	I1002 22:02:19.774182 1418276 ssh_runner.go:195] Run: which crictl
	I1002 22:02:19.778146 1418276 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:19.778218 1418276 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	
	
	==> CRI-O <==
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.677593804Z" level=info msg="Started container" PID=2314 containerID=d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110 description=kube-system/kube-scheduler-pause-449722/kube-scheduler id=202e855e-aabd-4d81-93bc-7ac86927dbd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fb1e45df8457161f853f53f5060cf588c73074f0321281287d240d881005114d
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.794226762Z" level=info msg="Created container 5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c: kube-system/etcd-pause-449722/etcd" id=ea8b7232-bcdc-4461-85a0-29ea816aeb60 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.798239266Z" level=info msg="Starting container: 5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c" id=6ed853c1-4b6a-49ae-bab9-65c7392ac0bb name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.800357134Z" level=info msg="Created container c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4: kube-system/kube-apiserver-pause-449722/kube-apiserver" id=35ba55e3-f10d-453f-a192-fb22a2ce048d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.802283769Z" level=info msg="Starting container: c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4" id=f0894e50-c2ce-4bfd-8589-a0b8cfdec04f name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.80432881Z" level=info msg="Started container" PID=2348 containerID=5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c description=kube-system/etcd-pause-449722/etcd id=6ed853c1-4b6a-49ae-bab9-65c7392ac0bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba00690ed8465de0495380e9e7ffd47a39e38b64673b4bd9d00620773c160f7
	Oct 02 22:02:09 pause-449722 crio[2074]: time="2025-10-02T22:02:09.811694695Z" level=info msg="Started container" PID=2351 containerID=c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4 description=kube-system/kube-apiserver-pause-449722/kube-apiserver id=f0894e50-c2ce-4bfd-8589-a0b8cfdec04f name=/runtime.v1.RuntimeService/StartContainer sandboxID=29e0d57718d5232c213405e215d0dfbf2a6954c1b149dfd8678d8672f248acb4
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.510586655Z" level=info msg="Created container 5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62: kube-system/kube-proxy-mm5sk/kube-proxy" id=3af6d83d-9bdb-4d12-a2aa-ff0823f5a37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.511757702Z" level=info msg="Starting container: 5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62" id=942346e0-2a02-404d-9336-6017bdaa6934 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:02:10 pause-449722 crio[2074]: time="2025-10-02T22:02:10.515419057Z" level=info msg="Started container" PID=2366 containerID=5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62 description=kube-system/kube-proxy-mm5sk/kube-proxy id=942346e0-2a02-404d-9336-6017bdaa6934 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a5b4bedf7c9a5e7fb5b64abb989c241de7f326e1ae52a633173190524bd20b9
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.811292883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816208032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816382929Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.816465159Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.827420639Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.827577969Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.82765028Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.8338617Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.834004344Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.834272277Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842418538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842465683Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.842488625Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.845847561Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:02:19 pause-449722 crio[2074]: time="2025-10-02T22:02:19.8458802Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5f49cb654c126       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   13 seconds ago       Running             kube-proxy                1                   4a5b4bedf7c9a       kube-proxy-mm5sk                       kube-system
	5b238744636e8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago       Running             etcd                      1                   4ba00690ed846       etcd-pause-449722                      kube-system
	c6db0a9c33ac8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago       Running             kube-apiserver            1                   29e0d57718d52       kube-apiserver-pause-449722            kube-system
	713a19e763d7a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago       Running             kube-controller-manager   1                   96769c3c8d441       kube-controller-manager-pause-449722   kube-system
	d95817eb05455       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago       Running             kube-scheduler            1                   fb1e45df84571       kube-scheduler-pause-449722            kube-system
	a0fd68147fd14       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   13 seconds ago       Running             coredns                   1                   1544ed896d489       coredns-66bc5c9577-dzf4t               kube-system
	03c5cbc703625       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   13 seconds ago       Running             kindnet-cni               1                   d2ee30865175f       kindnet-lbrbm                          kube-system
	7d066cc2d1e84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Exited              coredns                   0                   1544ed896d489       coredns-66bc5c9577-dzf4t               kube-system
	eb6bcce816ee7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   d2ee30865175f       kindnet-lbrbm                          kube-system
	62db619520d97       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   4a5b4bedf7c9a       kube-proxy-mm5sk                       kube-system
	438fa54f734f4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4ba00690ed846       etcd-pause-449722                      kube-system
	929ff0e798eee       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   fb1e45df84571       kube-scheduler-pause-449722            kube-system
	4d2e106700899       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   96769c3c8d441       kube-controller-manager-pause-449722   kube-system
	37a62fce49783       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   29e0d57718d52       kube-apiserver-pause-449722            kube-system
	
	
	==> coredns [7d066cc2d1e8459f5821b7e2796df942922d7d24410d2bd3519fab226daf4e37] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60658 - 53359 "HINFO IN 7641627972224923487.8635105253622644934. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025330309s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a0fd68147fd14125fbf51aaf91faf2600d48d2658ab5eb4cddf8e56dfb570ba5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37585 - 49754 "HINFO IN 5796983640354494215.2582439586429005221. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012420412s
	
	
	==> describe nodes <==
	Name:               pause-449722
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-449722
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=pause-449722
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_01_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:01:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-449722
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:02:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:02:10 +0000   Thu, 02 Oct 2025 22:01:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-449722
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ceaef681a1c43b0b63aace3800bed84
	  System UUID:                fe300808-6cad-40f8-b458-73bbea55969b
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dzf4t                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     68s
	  kube-system                 etcd-pause-449722                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         75s
	  kube-system                 kindnet-lbrbm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      68s
	  kube-system                 kube-apiserver-pause-449722             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-449722    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-mm5sk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-449722             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 67s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node pause-449722 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node pause-449722 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s (x8 over 81s)  kubelet          Node pause-449722 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  73s                kubelet          Node pause-449722 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s                kubelet          Node pause-449722 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s                kubelet          Node pause-449722 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           69s                node-controller  Node pause-449722 event: Registered Node pause-449722 in Controller
	  Normal   NodeReady                27s                kubelet          Node pause-449722 status is now: NodeReady
	  Normal   RegisteredNode           6s                 node-controller  Node pause-449722 event: Registered Node pause-449722 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:34] overlayfs: idmapped layers are currently not supported
	[ +41.246514] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:35] overlayfs: idmapped layers are currently not supported
	[  +2.995481] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:36] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:37] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [438fa54f734f4d94bf42128793810a6de0cd1720960c2ff8cd1d7300631fc51e] <==
	{"level":"warn","ts":"2025-10-02T22:01:04.771278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.802558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.826792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.870443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.906445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:04.953834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:01:05.103912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48576","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T22:02:01.136229Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T22:02:01.136288Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-449722","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-02T22:02:01.137076Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T22:02:01.275405Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T22:02:01.275473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275494Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275546Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275673Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T22:02:01.275691Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.275678Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275732Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T22:02:01.275796Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T22:02:01.275807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.279090Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-02T22:02:01.279173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T22:02:01.279215Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-02T22:02:01.279273Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-449722","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [5b238744636e867bde6303568eebcc50301884bd9542bdd78a389d2ef5742d1c] <==
	{"level":"warn","ts":"2025-10-02T22:02:12.153449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.176774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.190831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.210379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.226769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.238631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.262222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.273608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.289953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.308775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.325029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.342550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.388237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.390236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.408272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.427070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.447560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.470622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.483204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.497031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.520232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.554862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.563057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.593258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:02:12.689896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34580","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:02:22 up  6:44,  0 user,  load average: 3.81, 3.34, 2.55
	Linux pause-449722 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03c5cbc703625cd4a19470cd8ded95d449ef5493a4d596257f661e8675b3b0d2] <==
	I1002 22:02:09.509052       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:02:09.528772       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:02:09.528918       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:02:09.528930       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:02:09.528947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:02:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1002 22:02:09.807132       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:02:09.807227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:02:09.807288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:02:09.807366       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:02:09.807395       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:02:09.807404       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:02:09.807414       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:02:09.807524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 22:02:14.107794       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:02:14.107835       1 metrics.go:72] Registering metrics
	I1002 22:02:14.107905       1 controller.go:711] "Syncing nftables rules"
	I1002 22:02:19.810919       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:02:19.810978       1 main.go:301] handling current node
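	
	The "nri plugin exited: failed to connect to NRI service" line above is kindnet probing for an NRI socket that this crio node does not expose; kindnet keeps running without it, as the later "Caches are synced" lines show. A quick sketch to confirm the socket's absence (the path is taken verbatim from the log line):
	
	  # Check whether the runtime exposes an NRI socket at the path kindnet tried.
	  minikube ssh -p pause-449722 -- ls -l /var/run/nri/nri.sock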
	
	
	==> kindnet [eb6bcce816ee7f942f025c340187e101d9d7d3acc2bb3674588bea743c9a9cda] <==
	I1002 22:01:15.308871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:01:15.309112       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:01:15.309233       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:01:15.309244       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:01:15.309257       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:01:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:01:15.508498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:01:15.508583       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:01:15.508630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:01:15.508785       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:01:45.509453       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:01:45.509455       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:01:45.509564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 22:01:45.509570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 22:01:47.109100       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:01:47.109260       1 metrics.go:72] Registering metrics
	I1002 22:01:47.109321       1 controller.go:711] "Syncing nftables rules"
	I1002 22:01:55.512602       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:01:55.512660       1 main.go:301] handling current node
	
	
	==> kube-apiserver [37a62fce49783ed1731b18e1bdd96cd63f9ac8414730a0deb92d32309b32d5f7] <==
	W1002 22:02:01.156662       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156777       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156867       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.156958       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157069       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157162       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157249       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157342       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157436       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.157524       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159362       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159480       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159512       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159586       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159644       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159695       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159732       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159795       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159850       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159880       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159704       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.160014       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159851       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.160094       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 22:02:01.159965       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
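	
	Every gRPC channel above fails with "dial tcp 127.0.0.1:2379: connect: connection refused" at 22:02:01, i.e. etcd itself was down at that instant, which matches the pause/restart sequence this test exercises. A sketch for lining up container restart times (crictl and its -a/--name flags are standard; running it on this node via minikube ssh is the assumption):
	
	  # Show etcd and kube-apiserver containers, including exited ones, with their ages.
	  minikube ssh -p pause-449722 -- "sudo crictl ps -a --name 'etcd|kube-apiserver'"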
	
	
	==> kube-apiserver [c6db0a9c33ac806f45f3bba92498e130f1d57a8c68ff515b15d75aa161b6a1b4] <==
	I1002 22:02:13.416434       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1002 22:02:13.416453       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1002 22:02:13.957664       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 22:02:13.957779       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:02:13.957845       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:02:13.968773       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 22:02:13.968835       1 policy_source.go:240] refreshing policies
	I1002 22:02:13.975272       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:02:13.985715       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:02:13.986556       1 aggregator.go:171] initial CRD sync complete...
	I1002 22:02:13.986616       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 22:02:13.986646       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:02:13.986675       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:02:13.988013       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:02:13.988508       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:02:13.997889       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:02:14.018315       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:02:14.019222       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:02:14.044618       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 22:02:14.044832       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:02:14.049224       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:02:14.051572       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:02:14.051678       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:02:14.423283       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:02:14.814430       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [4d2e10670089995d9fcf18aa65a0b693d4b56c4e45f890dceea0a94112efda04] <==
	I1002 22:01:13.166287       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:01:13.169185       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:01:13.173679       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:01:13.182220       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:01:13.190576       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:01:13.190726       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:01:13.190753       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:01:13.190728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:01:13.190802       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:01:13.190814       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:01:13.191888       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:01:13.192028       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:01:13.193235       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:01:13.193235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:01:13.193255       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:01:13.193266       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 22:01:13.194151       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:01:13.193530       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:01:13.194668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 22:01:13.194679       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 22:01:13.196273       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:01:13.199951       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:01:13.207155       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:01:13.210422       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:01:58.379426       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [713a19e763d7a178a10c28fa8eeb30dfbd5bb39584d54662781fe7de08ed1508] <==
	I1002 22:02:16.206239       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:02:16.221797       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 22:02:16.231004       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:02:16.231102       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:02:16.231235       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:02:16.232414       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:02:16.232463       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:02:16.232453       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:02:16.232613       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:02:16.232734       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-449722"
	I1002 22:02:16.232795       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 22:02:16.232523       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:02:16.232533       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:02:16.233812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:02:16.232436       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 22:02:16.233898       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:02:16.233930       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:02:16.233084       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 22:02:16.238090       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:02:16.240385       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:02:16.241698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:02:16.242114       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:02:16.245073       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 22:02:16.248363       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:02:16.252608       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	
	
	==> kube-proxy [5f49cb654c126c5c20d4d70599dd280fb765edd2fac7dc84267e1ed128077b62] <==
	I1002 22:02:12.283210       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:02:12.875929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:02:14.079165       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:02:14.079286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:02:14.079412       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:02:14.119638       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:02:14.119754       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:02:14.131191       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:02:14.131509       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:02:14.131530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:14.132691       1 config.go:200] "Starting service config controller"
	I1002 22:02:14.132783       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:02:14.140944       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:02:14.141023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:02:14.142005       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:02:14.146275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:02:14.142497       1 config.go:309] "Starting node config controller"
	I1002 22:02:14.146292       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:02:14.146298       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:02:14.233341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:02:14.246663       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:02:14.246673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
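	
	The E-level "nodePortAddresses is unset" message above is kube-proxy's standing configuration warning, not a failure, and the log itself suggests the fix: `--nodeport-addresses primary`. In a kubeadm-style cluster the equivalent knob lives in the kube-proxy ConfigMap; a sketch, assuming this cluster follows that layout:
	
	  # Edit the KubeProxyConfiguration and set, in the config section:
	  #   nodePortAddresses: ["primary"]
	  kubectl --context pause-449722 -n kube-system edit configmap kube-proxy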
	
	
	==> kube-proxy [62db619520d973c09cde32d258e552287941877c1bec4ccbc118c0a24c051bdc] <==
	I1002 22:01:15.280233       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:01:15.416067       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:01:15.521829       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:01:15.521864       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:01:15.521939       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:01:15.539255       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:01:15.539308       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:01:15.542759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:01:15.543063       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:01:15.543136       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:01:15.546559       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:01:15.546582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:01:15.546856       1 config.go:200] "Starting service config controller"
	I1002 22:01:15.546880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:01:15.547298       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:01:15.554727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:01:15.554830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:01:15.547721       1 config.go:309] "Starting node config controller"
	I1002 22:01:15.554919       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:01:15.554947       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:01:15.647136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:01:15.647143       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [929ff0e798eeeb715e148515907bc2d9402bda656b33a794a6dc784f5fdcf30e] <==
	E1002 22:01:07.517452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:01:07.518517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:01:07.518896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:01:07.518987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:01:07.519081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:01:07.519137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:01:07.519198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:01:07.519288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:01:07.519358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:01:07.519482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:01:07.519515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:01:07.519552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:01:07.519595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:01:07.519634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:01:07.519687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:01:07.519766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:01:07.519806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:01:07.519861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1002 22:01:08.997477       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:01.123618       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 22:02:01.123651       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 22:02:01.123681       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 22:02:01.123706       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:01.123988       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 22:02:01.124011       1 run.go:72] "command failed" err="finished without leader elect"
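	
	The burst of "is forbidden: User \"system:kube-scheduler\" cannot list ..." errors at 22:01:07 above is the usual startup race before the apiserver finishes RBAC bootstrapping; the scheduler recovers once the default ClusterRoleBindings exist, as the 22:01:08 "Caches are synced" line shows. A sketch for spot-checking the scheduler's permissions after the fact (`kubectl auth can-i` and `--as` are standard flags):
	
	  # Does the scheduler identity currently have list access to nodes?
	  kubectl --context pause-449722 auth can-i list nodes --as=system:kube-scheduler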
	
	
	==> kube-scheduler [d95817eb0545540da41b79aa299ac44bbdf51c90c190253f76134693bfc68110] <==
	I1002 22:02:13.899815       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:13.904375       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:02:13.906877       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:02:13.906933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:02:13.914259       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 22:02:13.930527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:02:13.930713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:02:13.930808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:02:13.930929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:02:13.931021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:02:13.931145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:02:13.931247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:02:13.931348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:02:13.931447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:02:13.931543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:02:13.931743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:02:13.931909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:02:13.932076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:02:13.932166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:02:13.932255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:02:13.932358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:02:13.932420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:02:13.932462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:02:13.934531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 22:02:15.115783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.784945    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="a5114854e9985ad0bf13b7164e3eba60" pod="kube-system/kube-scheduler-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.806959    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-mm5sk\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="5dc0b155-d08a-4459-834e-2be1aabc4aa7" pod="kube-system/kube-proxy-mm5sk"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.831251    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-lbrbm\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="c1ae6269-d077-4adb-9511-fe7466fd8e15" pod="kube-system/kindnet-lbrbm"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.844431    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-dzf4t\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="f4e8dae8-7cc8-475e-8da2-b04e1cea5aed" pod="kube-system/coredns-66bc5c9577-dzf4t"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.868936    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="ba05bb8896f50a1cdb070df356a28cd5" pod="kube-system/etcd-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.876269    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="0744e62811fd4068ce33b544ef097fad" pod="kube-system/kube-controller-manager-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.886390    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="ba05bb8896f50a1cdb070df356a28cd5" pod="kube-system/etcd-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.915581    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="0744e62811fd4068ce33b544ef097fad" pod="kube-system/kube-controller-manager-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.934119    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="1fd5b59490a04240a195f6691e76ab60" pod="kube-system/kube-apiserver-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.941699    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-449722\" is forbidden: User \"system:node:pause-449722\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-449722' and this object" podUID="a5114854e9985ad0bf13b7164e3eba60" pod="kube-system/kube-scheduler-pause-449722"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.945776    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "kube-proxy-mm5sk" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="5dc0b155-d08a-4459-834e-2be1aabc4aa7" pod="kube-system/kube-proxy-mm5sk"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.948178    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "kindnet-lbrbm" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="c1ae6269-d077-4adb-9511-fe7466fd8e15" pod="kube-system/kindnet-lbrbm"
	Oct 02 22:02:13 pause-449722 kubelet[1314]: E1002 22:02:13.986293    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         pods "coredns-66bc5c9577-dzf4t" is forbidden: User "system:node:pause-449722" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-449722' and this object
	Oct 02 22:02:13 pause-449722 kubelet[1314]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Oct 02 22:02:13 pause-449722 kubelet[1314]:  > podUID="f4e8dae8-7cc8-475e-8da2-b04e1cea5aed" pod="kube-system/coredns-66bc5c9577-dzf4t"
	Oct 02 22:02:17 pause-449722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:02:17 pause-449722 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:02:17 pause-449722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
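	
	The kubelet's "no relationship found between node 'pause-449722' and this object" errors come from the node authorizer while the freshly restarted apiserver is still rebuilding its access graph, and the "clusterrole ... not found" RBAC messages point the same way; both are transient during this window. A sketch for confirming the bootstrap roles reappear once the apiserver settles:
	
	  # The three ClusterRoles named in the kubelet errors should exist after startup.
	  kubectl --context pause-449722 get clusterrole system:basic-user system:discovery system:public-info-viewer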
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-449722 -n pause-449722
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-449722 -n pause-449722: exit status 2 (493.615568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-449722 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.62s)
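
A sketch for reproducing the failing step outside the test harness, using the profile name from the logs above and the verbosity flags this report already uses elsewhere:

  # Re-run the pause step by hand with verbose logging.
  out/minikube-linux-arm64 pause -p pause-449722 --alsologtostderr -v=1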

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (381.697902ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:14:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
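
The MK_ADDON_ENABLE_PAUSED failure above is raised by minikube's paused-state check, which (per the error text) shells out to `sudo runc list -f json`; on this crio node /run/runc does not exist, so the check itself fails before the addon is ever touched. A sketch for inspecting the same state by hand (the second command simply shows crio's own view of the containers):

  # Reproduce the failing check, then list containers through crio instead.
  minikube ssh -p old-k8s-version-173127 -- "sudo runc list -f json; sudo crictl ps"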
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-173127 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-173127 describe deploy/metrics-server -n kube-system: exit status 1 (137.066071ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-173127 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
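
The image assertion reads the container image out of the metrics-server Deployment spec; since the addon was never applied (the describe above returned NotFound), the expected string is compared against empty output. A sketch of the same lookup, assuming the addon's standard deployment name:

	kubectl --context old-k8s-version-173127 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
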
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-173127
helpers_test.go:243: (dbg) docker inspect old-k8s-version-173127:

-- stdout --
	[
	    {
	        "Id": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	        "Created": "2025-10-02T22:13:49.766969826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1451112,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:13:49.823241542Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hosts",
	        "LogPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481-json.log",
	        "Name": "/old-k8s-version-173127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-173127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-173127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	                "LowerDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-173127",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-173127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-173127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b0ee3f5df204aa39184364a09abe0b7520663676e13ce45c95a73a63cfc3985",
	            "SandboxKey": "/var/run/docker/netns/9b0ee3f5df20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34551"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34552"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34555"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34553"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34554"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-173127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:6c:bf:93:98:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6008a0e16210e1fdcf0e30a954f2bad61c0505195953a96ceceb44b75081115d",
	                    "EndpointID": "7c792902a6f9390f0b60dbd5d3227d312d77c19777633ee27c3ef59362558ec6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-173127",
	                        "a2aece711092"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
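
When only a few fields of this inspect dump are needed, "docker inspect -f" with a Go template extracts them directly; the harness itself uses the ports template later in these logs. A brief sketch against the same container:

	docker inspect -f '{{.State.Status}}' old-k8s-version-173127
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-173127
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-173127
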
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25: (1.544045732s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo containerd config dump                                                                                                                                                                                                  │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo crio config                                                                                                                                                                                                             │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                                                                                                                                                              │ cilium-198170             │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-915858  │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-292135 │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ delete  │ -p force-systemd-flag-292135                                                                                                                                                                                                                  │ force-systemd-flag-292135 │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858  │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127    │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949    │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127    │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:14:30
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:14:30.492180 1453244 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:14:30.492273 1453244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:14:30.492327 1453244 out.go:374] Setting ErrFile to fd 2...
	I1002 22:14:30.492330 1453244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:14:30.492588 1453244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:14:30.493080 1453244 out.go:368] Setting JSON to false
	I1002 22:14:30.494089 1453244 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24996,"bootTime":1759418275,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:14:30.494147 1453244 start.go:140] virtualization:  
	I1002 22:14:30.502356 1453244 out.go:179] * [cert-expiration-247949] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:14:30.505860 1453244 notify.go:220] Checking for updates...
	I1002 22:14:30.508910 1453244 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:14:30.512387 1453244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:14:30.515440 1453244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:14:30.518406 1453244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:14:30.521450 1453244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:14:30.524427 1453244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:14:30.528000 1453244 config.go:182] Loaded profile config "cert-expiration-247949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:14:30.528528 1453244 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:14:30.573179 1453244 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:14:30.573305 1453244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:14:30.690023 1453244 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:14:30.676214047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:14:30.690291 1453244 docker.go:318] overlay module found
	I1002 22:14:30.702194 1453244 out.go:179] * Using the docker driver based on existing profile
	I1002 22:14:30.705928 1453244 start.go:304] selected driver: docker
	I1002 22:14:30.705938 1453244 start.go:924] validating driver "docker" against &{Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:14:30.706271 1453244 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:14:30.706969 1453244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:14:30.809074 1453244 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:14:30.799843462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:14:30.809374 1453244 cni.go:84] Creating CNI manager for ""
	I1002 22:14:30.809435 1453244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:14:30.809471 1453244 start.go:348] cluster config:
	{Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:14:30.817156 1453244 out.go:179] * Starting "cert-expiration-247949" primary control-plane node in "cert-expiration-247949" cluster
	I1002 22:14:30.823896 1453244 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:14:30.827678 1453244 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:14:30.831827 1453244 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:14:30.831883 1453244 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:14:30.831892 1453244 cache.go:58] Caching tarball of preloaded images
	I1002 22:14:30.831991 1453244 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:14:30.832000 1453244 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:14:30.832119 1453244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:14:30.832323 1453244 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/config.json ...
	I1002 22:14:30.862390 1453244 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:14:30.862401 1453244 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:14:30.862422 1453244 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:14:30.862445 1453244 start.go:360] acquireMachinesLock for cert-expiration-247949: {Name:mk2d86ac4c57797e7b17530e8bdce2bc6b8f9b2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:14:30.862497 1453244 start.go:364] duration metric: took 36.143µs to acquireMachinesLock for "cert-expiration-247949"
	I1002 22:14:30.862517 1453244 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:14:30.862527 1453244 fix.go:54] fixHost starting: 
	I1002 22:14:30.862786 1453244 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:14:30.890461 1453244 fix.go:112] recreateIfNeeded on cert-expiration-247949: state=Running err=<nil>
	W1002 22:14:30.890481 1453244 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:14:28.769041 1450722 addons.go:514] duration metric: took 1.233852515s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 22:14:28.927720 1450722 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-173127" context rescaled to 1 replicas
	W1002 22:14:30.432429 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	W1002 22:14:32.437489 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	I1002 22:14:30.894503 1453244 out.go:252] * Updating the running docker "cert-expiration-247949" container ...
	I1002 22:14:30.894542 1453244 machine.go:93] provisionDockerMachine start ...
	I1002 22:14:30.894627 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:30.919287 1453244 main.go:141] libmachine: Using SSH client type: native
	I1002 22:14:30.919672 1453244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:14:30.919680 1453244 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:14:31.088817 1453244 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-247949
	
	I1002 22:14:31.088831 1453244 ubuntu.go:182] provisioning hostname "cert-expiration-247949"
	I1002 22:14:31.088920 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:31.126316 1453244 main.go:141] libmachine: Using SSH client type: native
	I1002 22:14:31.126623 1453244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:14:31.126632 1453244 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-247949 && echo "cert-expiration-247949" | sudo tee /etc/hostname
	I1002 22:14:31.302753 1453244 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-247949
	
	I1002 22:14:31.302830 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:31.326856 1453244 main.go:141] libmachine: Using SSH client type: native
	I1002 22:14:31.327168 1453244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:14:31.327184 1453244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-247949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-247949/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-247949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:14:31.506815 1453244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:14:31.506847 1453244 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:14:31.506873 1453244 ubuntu.go:190] setting up certificates
	I1002 22:14:31.506882 1453244 provision.go:84] configureAuth start
	I1002 22:14:31.506948 1453244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-247949
	I1002 22:14:31.536767 1453244 provision.go:143] copyHostCerts
	I1002 22:14:31.536852 1453244 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:14:31.536868 1453244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:14:31.536985 1453244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:14:31.537120 1453244 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:14:31.537145 1453244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:14:31.537178 1453244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:14:31.537290 1453244 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:14:31.537295 1453244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:14:31.537348 1453244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:14:31.537546 1453244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-247949 san=[127.0.0.1 192.168.76.2 cert-expiration-247949 localhost minikube]
	I1002 22:14:33.099970 1453244 provision.go:177] copyRemoteCerts
	I1002 22:14:33.100024 1453244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:14:33.100062 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:33.119414 1453244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:14:33.221686 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:14:33.249671 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 22:14:33.271753 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:14:33.290797 1453244 provision.go:87] duration metric: took 1.783893057s to configureAuth
	I1002 22:14:33.290813 1453244 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:14:33.290997 1453244 config.go:182] Loaded profile config "cert-expiration-247949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:14:33.291098 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:33.309070 1453244 main.go:141] libmachine: Using SSH client type: native
	I1002 22:14:33.309358 1453244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34541 <nil> <nil>}
	I1002 22:14:33.309371 1453244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1002 22:14:34.925186 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	W1002 22:14:36.925886 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	I1002 22:14:38.650014 1453244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:14:38.650087 1453244 machine.go:96] duration metric: took 7.755479103s to provisionDockerMachine
	I1002 22:14:38.650098 1453244 start.go:293] postStartSetup for "cert-expiration-247949" (driver="docker")
	I1002 22:14:38.650109 1453244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:14:38.650169 1453244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:14:38.650222 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:38.668614 1453244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:14:38.766965 1453244 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:14:38.770401 1453244 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:14:38.770417 1453244 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:14:38.770431 1453244 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:14:38.770487 1453244 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:14:38.770563 1453244 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:14:38.770660 1453244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:14:38.778740 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:14:38.796722 1453244 start.go:296] duration metric: took 146.61012ms for postStartSetup
	I1002 22:14:38.796803 1453244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:14:38.796840 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:38.814424 1453244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:14:38.909343 1453244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:14:38.915034 1453244 fix.go:56] duration metric: took 8.052505051s for fixHost
	I1002 22:14:38.915049 1453244 start.go:83] releasing machines lock for "cert-expiration-247949", held for 8.052544591s
	I1002 22:14:38.915140 1453244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-247949
	I1002 22:14:38.936477 1453244 ssh_runner.go:195] Run: cat /version.json
	I1002 22:14:38.936519 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:38.936775 1453244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:14:38.936825 1453244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-247949
	I1002 22:14:38.962906 1453244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:14:38.971315 1453244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34541 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/cert-expiration-247949/id_rsa Username:docker}
	I1002 22:14:39.058246 1453244 ssh_runner.go:195] Run: systemctl --version
	I1002 22:14:39.152296 1453244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:14:39.200249 1453244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:14:39.205198 1453244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:14:39.205259 1453244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:14:39.213349 1453244 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:14:39.213363 1453244 start.go:495] detecting cgroup driver to use...
	I1002 22:14:39.213393 1453244 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:14:39.213441 1453244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:14:39.230554 1453244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:14:39.244316 1453244 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:14:39.244368 1453244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:14:39.260330 1453244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:14:39.273829 1453244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:14:39.415980 1453244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:14:39.567518 1453244 docker.go:234] disabling docker service ...
	I1002 22:14:39.567578 1453244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:14:39.583764 1453244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:14:39.596838 1453244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:14:39.731578 1453244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:14:39.866716 1453244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:14:39.881274 1453244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:14:39.898863 1453244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:14:39.898941 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.909966 1453244 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:14:39.910071 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.921214 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.932031 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.940874 1453244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:14:39.949999 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.959166 1453244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.968780 1453244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:14:39.978874 1453244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:14:39.987033 1453244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:14:39.994952 1453244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:14:40.161399 1453244 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1002 22:14:38.926899 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	W1002 22:14:41.426601 1450722 node_ready.go:57] node "old-k8s-version-173127" has "Ready":"False" status (will retry)
	I1002 22:14:42.425120 1450722 node_ready.go:49] node "old-k8s-version-173127" is "Ready"
	I1002 22:14:42.425149 1450722 node_ready.go:38] duration metric: took 14.003066174s for node "old-k8s-version-173127" to be "Ready" ...
	I1002 22:14:42.425164 1450722 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:14:42.425224 1450722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:14:42.439147 1450722 api_server.go:72] duration metric: took 14.904370821s to wait for apiserver process to appear ...
	I1002 22:14:42.439174 1450722 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:14:42.439194 1450722 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:14:42.455010 1450722 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:14:42.456529 1450722 api_server.go:141] control plane version: v1.28.0
	I1002 22:14:42.456552 1450722 api_server.go:131] duration metric: took 17.371295ms to wait for apiserver health ...
	I1002 22:14:42.456560 1450722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:14:42.466788 1450722 system_pods.go:59] 8 kube-system pods found
	I1002 22:14:42.466902 1450722 system_pods.go:61] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:14:42.466918 1450722 system_pods.go:61] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:42.466925 1450722 system_pods.go:61] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:42.466930 1450722 system_pods.go:61] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:42.466936 1450722 system_pods.go:61] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:42.466954 1450722 system_pods.go:61] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:42.466965 1450722 system_pods.go:61] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:42.466971 1450722 system_pods.go:61] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:14:42.466977 1450722 system_pods.go:74] duration metric: took 10.411541ms to wait for pod list to return data ...
	I1002 22:14:42.466986 1450722 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:14:42.469500 1450722 default_sa.go:45] found service account: "default"
	I1002 22:14:42.469524 1450722 default_sa.go:55] duration metric: took 2.529276ms for default service account to be created ...
	I1002 22:14:42.469534 1450722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:14:42.474394 1450722 system_pods.go:86] 8 kube-system pods found
	I1002 22:14:42.474429 1450722 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:14:42.474438 1450722 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:42.474444 1450722 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:42.474449 1450722 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:42.474478 1450722 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:42.474483 1450722 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:42.474495 1450722 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:42.474501 1450722 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:14:42.474522 1450722 retry.go:31] will retry after 258.761671ms: missing components: kube-dns
	I1002 22:14:42.738384 1450722 system_pods.go:86] 8 kube-system pods found
	I1002 22:14:42.738493 1450722 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:14:42.738531 1450722 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:42.738558 1450722 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:42.738578 1450722 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:42.738599 1450722 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:42.738618 1450722 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:42.738651 1450722 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:42.738671 1450722 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:14:42.738702 1450722 retry.go:31] will retry after 293.753486ms: missing components: kube-dns
	I1002 22:14:43.037184 1450722 system_pods.go:86] 8 kube-system pods found
	I1002 22:14:43.037219 1450722 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:14:43.037226 1450722 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:43.037233 1450722 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:43.037237 1450722 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:43.037242 1450722 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:43.037246 1450722 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:43.037250 1450722 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:43.037255 1450722 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:14:43.037271 1450722 retry.go:31] will retry after 398.254479ms: missing components: kube-dns
	I1002 22:14:43.441651 1450722 system_pods.go:86] 8 kube-system pods found
	I1002 22:14:43.441690 1450722 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:14:43.441698 1450722 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:43.441704 1450722 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:43.441708 1450722 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:43.441722 1450722 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:43.441727 1450722 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:43.441731 1450722 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:43.441741 1450722 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:14:43.441758 1450722 retry.go:31] will retry after 536.918163ms: missing components: kube-dns
	I1002 22:14:43.983089 1450722 system_pods.go:86] 8 kube-system pods found
	I1002 22:14:43.983119 1450722 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Running
	I1002 22:14:43.983127 1450722 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:14:43.983131 1450722 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:14:43.983135 1450722 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running
	I1002 22:14:43.983141 1450722 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running
	I1002 22:14:43.983146 1450722 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:14:43.983153 1450722 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:14:43.983157 1450722 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Running
	I1002 22:14:43.983167 1450722 system_pods.go:126] duration metric: took 1.513618871s to wait for k8s-apps to be running ...
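	
	The retry loop above polls the kube-system pod list until no component is missing; the "missing components: kube-dns" message clears once the coredns pod turns Running. A hedged kubectl equivalent (k8s-app=kube-dns is the conventional coredns selector):
	
	kubectl -n kube-system get pods -l k8s-app=kube-dns
	# NAME                       READY   STATUS    ...
	# coredns-5dd5756b68-78sbd   1/1     Running   ...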
	I1002 22:14:43.983174 1450722 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:14:43.983234 1450722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:14:43.996497 1450722 system_svc.go:56] duration metric: took 13.314169ms WaitForService to wait for kubelet
	I1002 22:14:43.996525 1450722 kubeadm.go:586] duration metric: took 16.461756221s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:14:43.996545 1450722 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:14:43.999396 1450722 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:14:43.999427 1450722 node_conditions.go:123] node cpu capacity is 2
	I1002 22:14:43.999442 1450722 node_conditions.go:105] duration metric: took 2.891133ms to run NodePressure ...
	I1002 22:14:43.999456 1450722 start.go:241] waiting for startup goroutines ...
	I1002 22:14:43.999464 1450722 start.go:246] waiting for cluster config update ...
	I1002 22:14:43.999477 1450722 start.go:255] writing updated cluster config ...
	I1002 22:14:43.999800 1450722 ssh_runner.go:195] Run: rm -f paused
	I1002 22:14:44.007637 1450722 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:14:44.012329 1450722 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.017632 1450722 pod_ready.go:94] pod "coredns-5dd5756b68-78sbd" is "Ready"
	I1002 22:14:44.017656 1450722 pod_ready.go:86] duration metric: took 5.299968ms for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.021012 1450722 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.026617 1450722 pod_ready.go:94] pod "etcd-old-k8s-version-173127" is "Ready"
	I1002 22:14:44.026649 1450722 pod_ready.go:86] duration metric: took 5.607203ms for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.029843 1450722 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.035344 1450722 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-173127" is "Ready"
	I1002 22:14:44.035370 1450722 pod_ready.go:86] duration metric: took 5.492153ms for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.038733 1450722 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.412337 1450722 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-173127" is "Ready"
	I1002 22:14:44.412374 1450722 pod_ready.go:86] duration metric: took 373.611634ms for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:44.613265 1450722 pod_ready.go:83] waiting for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:45.019374 1450722 pod_ready.go:94] pod "kube-proxy-86prs" is "Ready"
	I1002 22:14:45.019401 1450722 pod_ready.go:86] duration metric: took 406.110293ms for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:45.221129 1450722 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:45.612318 1450722 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-173127" is "Ready"
	I1002 22:14:45.612345 1450722 pod_ready.go:86] duration metric: took 391.180686ms for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:14:45.612358 1450722 pod_ready.go:40] duration metric: took 1.604624241s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:14:45.689277 1450722 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 22:14:45.692699 1450722 out.go:203] 
	W1002 22:14:45.695655 1450722 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 22:14:45.698726 1450722 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 22:14:45.702594 1450722 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-173127" cluster and "default" namespace by default
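	
	The warning above is about version skew: the host kubectl (v1.33.2) is five minor versions ahead of the v1.28.0 cluster. Running the bundled kubectl through minikube, as the hint suggests, avoids the skew (profile name from this run):
	
	out/minikube-linux-arm64 -p old-k8s-version-173127 kubectl -- get pods -A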
	I1002 22:14:48.605830 1453244 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.44440909s)
	I1002 22:14:48.605845 1453244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:14:48.605898 1453244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:14:48.609890 1453244 start.go:563] Will wait 60s for crictl version
	I1002 22:14:48.609943 1453244 ssh_runner.go:195] Run: which crictl
	I1002 22:14:48.613590 1453244 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:14:48.639398 1453244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
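	
	minikube locates crictl with which(1) and then queries the runtime over the CRI socket; the same version check can be run by hand (socket path as used elsewhere in this log):
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1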
	I1002 22:14:48.639480 1453244 ssh_runner.go:195] Run: crio --version
	I1002 22:14:48.670435 1453244 ssh_runner.go:195] Run: crio --version
	I1002 22:14:48.707179 1453244 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:14:48.710240 1453244 cli_runner.go:164] Run: docker network inspect cert-expiration-247949 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:14:48.726447 1453244 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:14:48.730684 1453244 kubeadm.go:883] updating cluster {Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:14:48.730798 1453244 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:14:48.730851 1453244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:14:48.763911 1453244 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:14:48.763922 1453244 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:14:48.763979 1453244 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:14:48.789745 1453244 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:14:48.789756 1453244 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:14:48.789762 1453244 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:14:48.789864 1453244 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-247949 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:14:48.789953 1453244 ssh_runner.go:195] Run: crio config
	I1002 22:14:48.845913 1453244 cni.go:84] Creating CNI manager for ""
	I1002 22:14:48.845926 1453244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:14:48.845939 1453244 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:14:48.845961 1453244 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-247949 NodeName:cert-expiration-247949 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:14:48.846144 1453244 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-247949"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:14:48.846213 1453244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:14:48.853918 1453244 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:14:48.853987 1453244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:14:48.861626 1453244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 22:14:48.874913 1453244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:14:48.889727 1453244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1002 22:14:48.905308 1453244 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:14:48.909286 1453244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:14:49.045397 1453244 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:14:49.060148 1453244 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949 for IP: 192.168.76.2
	I1002 22:14:49.060158 1453244 certs.go:195] generating shared ca certs ...
	I1002 22:14:49.060173 1453244 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:49.060309 1453244 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:14:49.060353 1453244 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:14:49.060359 1453244 certs.go:257] generating profile certs ...
	W1002 22:14:49.060474 1453244 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1002 22:14:49.060497 1453244 certs.go:624] cert expired /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt: expiration: 2025-10-02 22:14:09 +0000 UTC, now: 2025-10-02 22:14:49.060491918 +0000 UTC m=+18.634103746
	I1002 22:14:49.060642 1453244 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key
	I1002 22:14:49.060659 1453244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt with IP's: []
	I1002 22:14:49.786303 1453244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt ...
	I1002 22:14:49.786321 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.crt: {Name:mk949fbce5691824432b6fbe669eb2de4be7c8ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:49.786486 1453244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key ...
	I1002 22:14:49.786494 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/client.key: {Name:mk6e9f2c97446e1e12f13ef33a2b7cf688f68790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1002 22:14:49.786667 1453244 out.go:285] ! Certificate apiserver.crt.d7aa8472 has expired. Generating a new one...
	I1002 22:14:49.786688 1453244 certs.go:624] cert expired /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472: expiration: 2025-10-02 22:14:09 +0000 UTC, now: 2025-10-02 22:14:49.786683807 +0000 UTC m=+19.360295619
	I1002 22:14:49.786813 1453244 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472
	I1002 22:14:49.786828 1453244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:14:50.288314 1453244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 ...
	I1002 22:14:50.288329 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472: {Name:mk9a3a606123afb40827f98ee97ae7e6b48ab39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:50.288493 1453244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472 ...
	I1002 22:14:50.288501 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472: {Name:mk2d8fd8aa8cc0ef28f8849586874d59b755bc51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:50.288565 1453244 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt.d7aa8472 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt
	I1002 22:14:50.288704 1453244 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key.d7aa8472 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key
	W1002 22:14:50.288875 1453244 out.go:285] ! Certificate proxy-client.crt has expired. Generating a new one...
	I1002 22:14:50.288895 1453244 certs.go:624] cert expired /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt: expiration: 2025-10-02 22:14:10 +0000 UTC, now: 2025-10-02 22:14:50.288891556 +0000 UTC m=+19.862503368
	I1002 22:14:50.288989 1453244 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key
	I1002 22:14:50.289002 1453244 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt with IP's: []
	I1002 22:14:50.660605 1453244 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt ...
	I1002 22:14:50.660621 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt: {Name:mk45c1889e4a25adecad200fe9310467215d1567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:50.660770 1453244 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key ...
	I1002 22:14:50.660778 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key: {Name:mk93b91063bcb0bf6d118c05229c11f6a715679f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:50.660965 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:14:50.661001 1453244 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:14:50.661009 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:14:50.661031 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:14:50.661053 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:14:50.661076 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:14:50.661115 1453244 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:14:50.661671 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:14:50.709575 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:14:50.775719 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:14:50.819195 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:14:50.873941 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 22:14:50.913768 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:14:50.952327 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:14:50.997065 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/cert-expiration-247949/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:14:51.033268 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:14:51.063309 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:14:51.098608 1453244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:14:51.134807 1453244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:14:51.156623 1453244 ssh_runner.go:195] Run: openssl version
	I1002 22:14:51.169266 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:14:51.183099 1453244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:14:51.191578 1453244 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:14:51.191648 1453244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:14:51.276071 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:14:51.288240 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:14:51.303005 1453244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:14:51.308357 1453244 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:14:51.308421 1453244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:14:51.367599 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:14:51.377520 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:14:51.389417 1453244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:14:51.395015 1453244 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:14:51.395098 1453244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:14:51.448370 1453244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
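	
	The /etc/ssl/certs/b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject-hash filenames: each is the output of openssl x509 -hash for the certificate plus a .0 suffix, which is the naming OpenSSL's hashed directory lookup expects. A sketch of deriving one (paths from the log):
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # b5213941 for this CA, per the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"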
	I1002 22:14:51.459128 1453244 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:14:51.463938 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:14:51.514748 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:14:51.566047 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:14:51.631096 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:14:51.716868 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:14:51.785546 1453244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
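	
	Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how minikube decides which remaining control-plane certs also need regenerating after the client, apiserver and proxy-client certs were rotated. A loop sketch over the same files:
	
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: valid for >24h" || echo "${c}: expiring soon"
	done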
	I1002 22:14:51.857746 1453244 kubeadm.go:400] StartCluster: {Name:cert-expiration-247949 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-247949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:14:51.857830 1453244 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:14:51.857958 1453244 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:14:51.906102 1453244 cri.go:89] found id: "867a4b00b3ab8ea1599bf1c7ed0afa930280c3506896fec348fa5a35fd6cc11f"
	I1002 22:14:51.906116 1453244 cri.go:89] found id: "86b2c0e06228c0e5f8eb17514ec69b3d8da0f6594833715040ddafdaf0638946"
	I1002 22:14:51.906120 1453244 cri.go:89] found id: "0ac9be439ba55b4b2b8bc1ebd4e56d71bfdd30bc460f9ffd560443c288ee2031"
	I1002 22:14:51.906122 1453244 cri.go:89] found id: "734429d3f36c757e084a556d897cb95fcafbaf0e09701d9abf380f27c87015b1"
	I1002 22:14:51.906125 1453244 cri.go:89] found id: "1a85b54149a290ccc5abf63a85a86f12c77b0b30f162981989a1b8b4558874d7"
	I1002 22:14:51.906128 1453244 cri.go:89] found id: "1a6329c5b1e5635a18b5d956b2a63ad32697952cbfcc5e71e7f58bbbd6a161bf"
	I1002 22:14:51.906144 1453244 cri.go:89] found id: "acb0bbe678a5a3696a2c393d50b842185a6887a4ecd19c0ebcb452fe6ffa7ebd"
	I1002 22:14:51.906146 1453244 cri.go:89] found id: "72008d9240e4e5f3aa8c2c627a3ce1a65e1c12fd3f7422137e8b0b1fe8c632c2"
	I1002 22:14:51.906156 1453244 cri.go:89] found id: "3a08a3503d9e9c6b46bfc962dd887761b91676420651fe43a160b1ad70e2a555"
	I1002 22:14:51.906163 1453244 cri.go:89] found id: "c849d0ad3bc2ab957e2c4e401b7fceafff570d2c25676942d29536cd8e181fe0"
	I1002 22:14:51.906165 1453244 cri.go:89] found id: "2da0cf659809656cae1e861e0e900f9ba248006616fc1da5229fd4430fcea5af"
	I1002 22:14:51.906170 1453244 cri.go:89] found id: "d67e20b477710f5ac54cfdb66fad7daad3c945307793a209d2f12c4eba336ba6"
	I1002 22:14:51.906172 1453244 cri.go:89] found id: "c80b5990685babe08af85ce8146587108c791b7948e150724ddeee93ec7713f5"
	I1002 22:14:51.906174 1453244 cri.go:89] found id: "9d0844ce325ea681e741e1cea1f18f2311960df77488094fda011fffb9c61eca"
	I1002 22:14:51.906176 1453244 cri.go:89] found id: "01d4d1b55740154bd0f770309c48fe9223d9f5f30decfea6afa925cc089b28b5"
	I1002 22:14:51.906180 1453244 cri.go:89] found id: "eb7fa88fb05174b6ab5dcf01d6ec054516fc27980483bfa98e27a829d8d3771d"
	I1002 22:14:51.906182 1453244 cri.go:89] found id: ""
	I1002 22:14:51.906245 1453244 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:14:51.920791 1453244 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:14:51Z" level=error msg="open /run/runc: no such file or directory"
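	
	The unpause probe fails because runc list reads the runtime state directory (/run/runc by default) and none exists on this node, so there is nothing to enumerate; the warning is non-fatal and the restart path simply continues below. The same condition can be reproduced directly (state directory taken from the error message):
	
	sudo runc list -f json 2>&1 | head -n 1   # open /run/runc: no such file or directory
	test -d /run/runc || echo "no runc state directory"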
	I1002 22:14:51.920881 1453244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:14:51.937982 1453244 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:14:51.937990 1453244 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:14:51.938085 1453244 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:14:51.958084 1453244 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:14:51.958903 1453244 kubeconfig.go:125] found "cert-expiration-247949" server: "https://192.168.76.2:8443"
	I1002 22:14:51.961214 1453244 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:14:51.979009 1453244 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:14:51.979041 1453244 kubeadm.go:601] duration metric: took 41.046182ms to restartPrimaryControlPlane
	I1002 22:14:51.979049 1453244 kubeadm.go:402] duration metric: took 121.311832ms to StartCluster
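	
	Whether the control plane needs reconfiguring is decided by diffing the kubeadm config shipped earlier (/var/tmp/minikube/kubeadm.yaml.new) against the one already on the node; an empty diff, as here, means the running cluster is reused as-is. The same check by hand (diff exits 0 when the files match):
	
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "no reconfiguration required"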
	I1002 22:14:51.979063 1453244 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:51.979139 1453244 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:14:51.980272 1453244 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:14:51.980556 1453244 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:14:51.980902 1453244 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:14:51.980969 1453244 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-247949"
	I1002 22:14:51.980982 1453244 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-247949"
	W1002 22:14:51.980987 1453244 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:14:51.981009 1453244 host.go:66] Checking if "cert-expiration-247949" exists ...
	I1002 22:14:51.981081 1453244 config.go:182] Loaded profile config "cert-expiration-247949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:14:51.981152 1453244 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-247949"
	I1002 22:14:51.981161 1453244 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-247949"
	I1002 22:14:51.981456 1453244 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:14:51.981531 1453244 cli_runner.go:164] Run: docker container inspect cert-expiration-247949 --format={{.State.Status}}
	I1002 22:14:51.994914 1453244 out.go:179] * Verifying Kubernetes components...
	I1002 22:14:51.998133 1453244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:14:52.011939 1453244 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 02 22:14:42 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:42.57704965Z" level=info msg="Created container d77352577558d7e58747a011e85daf6e276ae6f63fbc8f381da9e3aeb25cb821: kube-system/coredns-5dd5756b68-78sbd/coredns" id=945d9b86-7dd3-4742-8d73-3b02b66681fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:14:42 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:42.577927993Z" level=info msg="Starting container: d77352577558d7e58747a011e85daf6e276ae6f63fbc8f381da9e3aeb25cb821" id=243bb6c2-bb78-493c-8679-fa327beff798 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:14:42 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:42.585087957Z" level=info msg="Started container" PID=1924 containerID=d77352577558d7e58747a011e85daf6e276ae6f63fbc8f381da9e3aeb25cb821 description=kube-system/coredns-5dd5756b68-78sbd/coredns id=243bb6c2-bb78-493c-8679-fa327beff798 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d320b840be298514f42b15db9e9a6d61756d6c82f36609e60e9856a3ca8ee28
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.214260062Z" level=info msg="Running pod sandbox: default/busybox/POD" id=93b2216c-2468-4f01-bfc7-db627f194689 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.214337147Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.221023856Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eb12f786a85026b82bb770585b0ec008fbf45b7fa66f3c653e68a2ff07e58825 UID:18594e75-9c38-49b6-9ed4-84dddfb3c1a2 NetNS:/var/run/netns/55361fb9-e15d-462a-b221-0fcc1c24ace3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028d4780}] Aliases:map[]}"
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.221070042Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.245744584Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eb12f786a85026b82bb770585b0ec008fbf45b7fa66f3c653e68a2ff07e58825 UID:18594e75-9c38-49b6-9ed4-84dddfb3c1a2 NetNS:/var/run/netns/55361fb9-e15d-462a-b221-0fcc1c24ace3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028d4780}] Aliases:map[]}"
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.245909184Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.249191669Z" level=info msg="Ran pod sandbox eb12f786a85026b82bb770585b0ec008fbf45b7fa66f3c653e68a2ff07e58825 with infra container: default/busybox/POD" id=93b2216c-2468-4f01-bfc7-db627f194689 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.25248131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1fc22f64-48e7-4260-8c58-af508451f50d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.252866714Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1fc22f64-48e7-4260-8c58-af508451f50d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.2530019Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1fc22f64-48e7-4260-8c58-af508451f50d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.255460728Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a63cf769-03ab-4293-82c0-900d74d0fb80 name=/runtime.v1.ImageService/PullImage
	Oct 02 22:14:46 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:46.258295804Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.0968957Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a63cf769-03ab-4293-82c0-900d74d0fb80 name=/runtime.v1.ImageService/PullImage
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.100129678Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12ca3e39-2ee3-4bff-a3e2-6c9ce3cd7fd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.102433161Z" level=info msg="Creating container: default/busybox/busybox" id=6de180bf-338b-41ae-b406-3a5d41d6af0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.10329466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.108159132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.108803994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.12476377Z" level=info msg="Created container 723ba842dc18901a61ce7176e14db8adc78bf1a0b757b24bec0a20af816220eb: default/busybox/busybox" id=6de180bf-338b-41ae-b406-3a5d41d6af0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.126135569Z" level=info msg="Starting container: 723ba842dc18901a61ce7176e14db8adc78bf1a0b757b24bec0a20af816220eb" id=4906cc08-119d-4ac9-8225-dceba2e9d26b name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:14:48 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:48.127819625Z" level=info msg="Started container" PID=1975 containerID=723ba842dc18901a61ce7176e14db8adc78bf1a0b757b24bec0a20af816220eb description=default/busybox/busybox id=4906cc08-119d-4ac9-8225-dceba2e9d26b name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb12f786a85026b82bb770585b0ec008fbf45b7fa66f3c653e68a2ff07e58825
	Oct 02 22:14:54 old-k8s-version-173127 crio[839]: time="2025-10-02T22:14:54.13084226Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	723ba842dc189       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   eb12f786a8502       busybox                                          default
	d77352577558d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   2d320b840be29       coredns-5dd5756b68-78sbd                         kube-system
	a64093ca792d6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   00b21658a7a23       storage-provisioner                              kube-system
	b9fe4911feb41       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   fc37a5a2dedf9       kindnet-xtlhd                                    kube-system
	5ee47b2f899ab       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   bf1c3f566ce39       kube-proxy-86prs                                 kube-system
	9cbc07179f4b5       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   2fd08162278ec       etcd-old-k8s-version-173127                      kube-system
	5ddfdf1d83b36       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   c1a01a4cc1a6f       kube-apiserver-old-k8s-version-173127            kube-system
	10693f8a2bda3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   7913d6b4b4cee       kube-controller-manager-old-k8s-version-173127   kube-system
	64e2f2d15b4fa       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   d55a61cf8b52a       kube-scheduler-old-k8s-version-173127            kube-system
	
	
	==> coredns [d77352577558d7e58747a011e85daf6e276ae6f63fbc8f381da9e3aeb25cb821] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47729 - 59439 "HINFO IN 2862021240071185444.616172061905595237. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022670968s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-173127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-173127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=old-k8s-version-173127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_14_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-173127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:14:46 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:14:46 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:14:46 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:14:46 +0000   Thu, 02 Oct 2025 22:14:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-173127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 b22a5cb867844fd4854d7e22b8eb63c0
	  System UUID:                58645267-b6b4-4674-bdc1-8f78d84fc839
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-78sbd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-173127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-xtlhd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-173127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-173127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-86prs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-173127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-173127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-173127 event: Registered Node old-k8s-version-173127 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-173127 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 21:37] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:38] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9cbc07179f4b54b2cbb2cd2cd276f5e68a0ca7df076839f34313b308cfd52d1f] <==
	{"level":"info","ts":"2025-10-02T22:14:08.43271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-02T22:14:08.434108Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-02T22:14:08.435486Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:14:08.43554Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:14:08.435381Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T22:14:08.436356Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T22:14:08.436446Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T22:14:09.202154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-02T22:14:09.202333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-02T22:14:09.202389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-02T22:14:09.202466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-02T22:14:09.2025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-02T22:14:09.202553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-02T22:14:09.202592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-02T22:14:09.205643Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-173127 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T22:14:09.208812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:14:09.209934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T22:14:09.210145Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:14:09.21041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:14:09.212642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-02T22:14:09.214468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T22:14:09.215525Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-02T22:14:09.215834Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:14:09.215972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:14:09.218067Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 22:14:56 up  6:57,  0 user,  load average: 2.43, 1.63, 1.82
	Linux old-k8s-version-173127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b9fe4911feb41964472b76f49b61ae75bf668fa70f3fe6634e78141919ddb5ef] <==
	I1002 22:14:31.707701       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:14:31.708161       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:14:31.708344       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:14:31.708403       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:14:31.708442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:14:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:14:31.922593       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:14:31.922679       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:14:31.922711       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:14:31.927963       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 22:14:32.202155       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:14:32.202321       1 metrics.go:72] Registering metrics
	I1002 22:14:32.202436       1 controller.go:711] "Syncing nftables rules"
	I1002 22:14:41.927430       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:14:41.927481       1 main.go:301] handling current node
	I1002 22:14:51.923183       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:14:51.923223       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5ddfdf1d83b3636a72712931f983c10afdb169e437a477984c7b2795501f0779] <==
	I1002 22:14:11.891731       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 22:14:11.891979       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 22:14:11.892055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:14:11.892098       1 aggregator.go:166] initial CRD sync complete...
	I1002 22:14:11.892126       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 22:14:11.892151       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:14:11.892175       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:14:11.893283       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 22:14:11.896384       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 22:14:12.100747       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:14:12.598845       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 22:14:12.606131       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 22:14:12.606155       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:14:13.254358       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:14:13.306750       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:14:13.423113       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 22:14:13.431761       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 22:14:13.433025       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 22:14:13.441901       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:14:13.800683       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 22:14:14.969157       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 22:14:15.002724       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 22:14:15.020606       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 22:14:27.181327       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1002 22:14:27.465366       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [10693f8a2bda370229649e441c926fd454e86f8e565c807fa6d4c16f88072b2a] <==
	I1002 22:14:27.116805       1 shared_informer.go:318] Caches are synced for cronjob
	I1002 22:14:27.150649       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 22:14:27.155203       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1002 22:14:27.182497       1 shared_informer.go:318] Caches are synced for disruption
	I1002 22:14:27.213310       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-86prs"
	I1002 22:14:27.214099       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xtlhd"
	I1002 22:14:27.471709       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1002 22:14:27.545087       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:14:27.605241       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:14:27.605269       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 22:14:27.668057       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fxkkz"
	I1002 22:14:27.698874       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-78sbd"
	I1002 22:14:27.723151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="251.128129ms"
	I1002 22:14:27.745266       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.035065ms"
	I1002 22:14:27.745368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.937µs"
	I1002 22:14:28.491893       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 22:14:28.553320       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fxkkz"
	I1002 22:14:28.568537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.406405ms"
	I1002 22:14:28.590445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.861998ms"
	I1002 22:14:28.590584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.595µs"
	I1002 22:14:42.188698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.882µs"
	I1002 22:14:42.212486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.45µs"
	I1002 22:14:43.479715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.710887ms"
	I1002 22:14:43.480848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.391µs"
	I1002 22:14:47.003139       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [5ee47b2f899abd529949e748c3ccd15b94edcd7f8626dd08ab39aa590c9cf27e] <==
	I1002 22:14:29.236391       1 server_others.go:69] "Using iptables proxy"
	I1002 22:14:29.252507       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1002 22:14:29.290731       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:14:29.295248       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:14:29.295282       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:14:29.295289       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:14:29.295320       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:14:29.295518       1 server.go:846] "Version info" version="v1.28.0"
	I1002 22:14:29.295528       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:14:29.302302       1 config.go:188] "Starting service config controller"
	I1002 22:14:29.302319       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:14:29.302370       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:14:29.302375       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:14:29.303649       1 config.go:315] "Starting node config controller"
	I1002 22:14:29.303662       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:14:29.403184       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 22:14:29.403236       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:14:29.403816       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [64e2f2d15b4fa3d5627e8f87abb739c67a05f3ba044ad4de6ae8865c44c505b2] <==
	W1002 22:14:11.858371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 22:14:11.858584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 22:14:11.857821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 22:14:11.858679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 22:14:11.858336       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 22:14:11.858756       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 22:14:11.859275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 22:14:11.859337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 22:14:12.679755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 22:14:12.679805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 22:14:12.726457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 22:14:12.726535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 22:14:12.797686       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 22:14:12.797721       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1002 22:14:12.906767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 22:14:12.906806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 22:14:12.927421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 22:14:12.927453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 22:14:12.929442       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 22:14:12.929476       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 22:14:12.966066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 22:14:12.966112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 22:14:13.023918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 22:14:13.024040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1002 22:14:14.646141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304361    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e972a3fc-03ef-437a-a0d6-3f7337f3a2e7-lib-modules\") pod \"kindnet-xtlhd\" (UID: \"e972a3fc-03ef-437a-a0d6-3f7337f3a2e7\") " pod="kube-system/kindnet-xtlhd"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304461    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvwd\" (UniqueName: \"kubernetes.io/projected/e972a3fc-03ef-437a-a0d6-3f7337f3a2e7-kube-api-access-fmvwd\") pod \"kindnet-xtlhd\" (UID: \"e972a3fc-03ef-437a-a0d6-3f7337f3a2e7\") " pod="kube-system/kindnet-xtlhd"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304573    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1d34de9-8156-4726-b410-78c5d3ca9beb-lib-modules\") pod \"kube-proxy-86prs\" (UID: \"b1d34de9-8156-4726-b410-78c5d3ca9beb\") " pod="kube-system/kube-proxy-86prs"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304665    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e972a3fc-03ef-437a-a0d6-3f7337f3a2e7-cni-cfg\") pod \"kindnet-xtlhd\" (UID: \"e972a3fc-03ef-437a-a0d6-3f7337f3a2e7\") " pod="kube-system/kindnet-xtlhd"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304759    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klkq\" (UniqueName: \"kubernetes.io/projected/b1d34de9-8156-4726-b410-78c5d3ca9beb-kube-api-access-4klkq\") pod \"kube-proxy-86prs\" (UID: \"b1d34de9-8156-4726-b410-78c5d3ca9beb\") " pod="kube-system/kube-proxy-86prs"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304850    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e972a3fc-03ef-437a-a0d6-3f7337f3a2e7-xtables-lock\") pod \"kindnet-xtlhd\" (UID: \"e972a3fc-03ef-437a-a0d6-3f7337f3a2e7\") " pod="kube-system/kindnet-xtlhd"
	Oct 02 22:14:27 old-k8s-version-173127 kubelet[1363]: I1002 22:14:27.304948    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1d34de9-8156-4726-b410-78c5d3ca9beb-kube-proxy\") pod \"kube-proxy-86prs\" (UID: \"b1d34de9-8156-4726-b410-78c5d3ca9beb\") " pod="kube-system/kube-proxy-86prs"
	Oct 02 22:14:28 old-k8s-version-173127 kubelet[1363]: E1002 22:14:28.408482    1363 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 22:14:28 old-k8s-version-173127 kubelet[1363]: E1002 22:14:28.408618    1363 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1d34de9-8156-4726-b410-78c5d3ca9beb-kube-proxy podName:b1d34de9-8156-4726-b410-78c5d3ca9beb nodeName:}" failed. No retries permitted until 2025-10-02 22:14:28.908588311 +0000 UTC m=+13.976329245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b1d34de9-8156-4726-b410-78c5d3ca9beb-kube-proxy") pod "kube-proxy-86prs" (UID: "b1d34de9-8156-4726-b410-78c5d3ca9beb") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 22:14:28 old-k8s-version-173127 kubelet[1363]: W1002 22:14:28.475107    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-fc37a5a2dedf9d6f54fc34f849b7a972cff9fce2e64717912d2b36cd85cd197c WatchSource:0}: Error finding container fc37a5a2dedf9d6f54fc34f849b7a972cff9fce2e64717912d2b36cd85cd197c: Status 404 returned error can't find the container with id fc37a5a2dedf9d6f54fc34f849b7a972cff9fce2e64717912d2b36cd85cd197c
	Oct 02 22:14:32 old-k8s-version-173127 kubelet[1363]: I1002 22:14:32.428460    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-86prs" podStartSLOduration=5.428394797 podCreationTimestamp="2025-10-02 22:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:14:29.40520032 +0000 UTC m=+14.472941270" watchObservedRunningTime="2025-10-02 22:14:32.428394797 +0000 UTC m=+17.496135731"
	Oct 02 22:14:35 old-k8s-version-173127 kubelet[1363]: I1002 22:14:35.224388    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xtlhd" podStartSLOduration=5.18518346 podCreationTimestamp="2025-10-02 22:14:27 +0000 UTC" firstStartedPulling="2025-10-02 22:14:28.489502099 +0000 UTC m=+13.557243033" lastFinishedPulling="2025-10-02 22:14:31.528664389 +0000 UTC m=+16.596405322" observedRunningTime="2025-10-02 22:14:32.431080452 +0000 UTC m=+17.498821402" watchObservedRunningTime="2025-10-02 22:14:35.224345749 +0000 UTC m=+20.292086683"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.144828    1363 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.186167    1363 topology_manager.go:215] "Topology Admit Handler" podUID="f75699fd-ea3a-48d9-8ed2-5b44e003cb58" podNamespace="kube-system" podName="coredns-5dd5756b68-78sbd"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.190925    1363 topology_manager.go:215] "Topology Admit Handler" podUID="4434574b-8c2c-4a2a-b3c5-60122fa77e43" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.324554    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wcgb\" (UniqueName: \"kubernetes.io/projected/f75699fd-ea3a-48d9-8ed2-5b44e003cb58-kube-api-access-5wcgb\") pod \"coredns-5dd5756b68-78sbd\" (UID: \"f75699fd-ea3a-48d9-8ed2-5b44e003cb58\") " pod="kube-system/coredns-5dd5756b68-78sbd"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.324720    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtt9w\" (UniqueName: \"kubernetes.io/projected/4434574b-8c2c-4a2a-b3c5-60122fa77e43-kube-api-access-xtt9w\") pod \"storage-provisioner\" (UID: \"4434574b-8c2c-4a2a-b3c5-60122fa77e43\") " pod="kube-system/storage-provisioner"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.324754    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f75699fd-ea3a-48d9-8ed2-5b44e003cb58-config-volume\") pod \"coredns-5dd5756b68-78sbd\" (UID: \"f75699fd-ea3a-48d9-8ed2-5b44e003cb58\") " pod="kube-system/coredns-5dd5756b68-78sbd"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: I1002 22:14:42.324845    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4434574b-8c2c-4a2a-b3c5-60122fa77e43-tmp\") pod \"storage-provisioner\" (UID: \"4434574b-8c2c-4a2a-b3c5-60122fa77e43\") " pod="kube-system/storage-provisioner"
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: W1002 22:14:42.502568    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-00b21658a7a23bcbbce698e226b73187b904747f814c908782af077fbad2abc6 WatchSource:0}: Error finding container 00b21658a7a23bcbbce698e226b73187b904747f814c908782af077fbad2abc6: Status 404 returned error can't find the container with id 00b21658a7a23bcbbce698e226b73187b904747f814c908782af077fbad2abc6
	Oct 02 22:14:42 old-k8s-version-173127 kubelet[1363]: W1002 22:14:42.534576    1363 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-2d320b840be298514f42b15db9e9a6d61756d6c82f36609e60e9856a3ca8ee28 WatchSource:0}: Error finding container 2d320b840be298514f42b15db9e9a6d61756d6c82f36609e60e9856a3ca8ee28: Status 404 returned error can't find the container with id 2d320b840be298514f42b15db9e9a6d61756d6c82f36609e60e9856a3ca8ee28
	Oct 02 22:14:43 old-k8s-version-173127 kubelet[1363]: I1002 22:14:43.467355    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-78sbd" podStartSLOduration=16.467311484 podCreationTimestamp="2025-10-02 22:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:14:43.46709525 +0000 UTC m=+28.534836192" watchObservedRunningTime="2025-10-02 22:14:43.467311484 +0000 UTC m=+28.535052426"
	Oct 02 22:14:43 old-k8s-version-173127 kubelet[1363]: I1002 22:14:43.467462    1363 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.467444486 podCreationTimestamp="2025-10-02 22:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:14:43.453238387 +0000 UTC m=+28.520979337" watchObservedRunningTime="2025-10-02 22:14:43.467444486 +0000 UTC m=+28.535185445"
	Oct 02 22:14:45 old-k8s-version-173127 kubelet[1363]: I1002 22:14:45.912687    1363 topology_manager.go:215] "Topology Admit Handler" podUID="18594e75-9c38-49b6-9ed4-84dddfb3c1a2" podNamespace="default" podName="busybox"
	Oct 02 22:14:46 old-k8s-version-173127 kubelet[1363]: I1002 22:14:46.046203    1363 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8l7\" (UniqueName: \"kubernetes.io/projected/18594e75-9c38-49b6-9ed4-84dddfb3c1a2-kube-api-access-kv8l7\") pod \"busybox\" (UID: \"18594e75-9c38-49b6-9ed4-84dddfb3c1a2\") " pod="default/busybox"
	
	
	==> storage-provisioner [a64093ca792d6d5d3c7e5429516831700beb119393c1449a629b297693888fa1] <==
	I1002 22:14:42.583907       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:14:42.614678       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:14:42.614810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 22:14:42.626173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:14:42.629328       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_6b93e888-ce93-4ef7-b1a7-14de9d7b1fac!
	I1002 22:14:42.629468       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3777a98b-9fef-490e-ac94-3a602207c6ab", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-173127_6b93e888-ce93-4ef7-b1a7-14de9d7b1fac became leader
	I1002 22:14:42.730232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_6b93e888-ce93-4ef7-b1a7-14de9d7b1fac!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-173127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.33s)
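To iterate on this failure outside CI, the failed subtest can be rerun by name. A minimal sketch, assuming the minikube repository is checked out at the commit above and out/minikube-linux-arm64 has already been built; the exact harness flags this Jenkins job passes are not shown in the report:

	# hypothetical local rerun of the failed subtest (Go's -run matches each "/" level)
	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive' -timeout 90m -v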

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-173127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-173127 --alsologtostderr -v=1: exit status 80 (1.71396043s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-173127 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:16:22.467502 1460196 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:16:22.467682 1460196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:22.467694 1460196 out.go:374] Setting ErrFile to fd 2...
	I1002 22:16:22.467700 1460196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:22.467970 1460196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:16:22.468231 1460196 out.go:368] Setting JSON to false
	I1002 22:16:22.468257 1460196 mustload.go:65] Loading cluster: old-k8s-version-173127
	I1002 22:16:22.468649 1460196 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:16:22.469123 1460196 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:16:22.487296 1460196 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:16:22.487677 1460196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:22.552715 1460196 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:16:22.543140936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:22.553447 1460196 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-173127 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:16:22.559084 1460196 out.go:179] * Pausing node old-k8s-version-173127 ... 
	I1002 22:16:22.562493 1460196 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:16:22.562843 1460196 ssh_runner.go:195] Run: systemctl --version
	I1002 22:16:22.562904 1460196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:16:22.580410 1460196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:16:22.676602 1460196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:16:22.691547 1460196 pause.go:51] kubelet running: true
	I1002 22:16:22.691625 1460196 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:16:22.917555 1460196 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:16:22.917653 1460196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:16:23.000211 1460196 cri.go:89] found id: "262c0c70277ce895db2bcada5bdf1a9907c5b11a07877c05a26b7ca693ce2690"
	I1002 22:16:23.000239 1460196 cri.go:89] found id: "4fd4342de784742d74c842c78faaa45d24380c2b83bdf44da664a2c16a2289d7"
	I1002 22:16:23.000244 1460196 cri.go:89] found id: "084caaaf7525a0c67194fc9d1407cd6fe2a876234d337900e84141524d04d741"
	I1002 22:16:23.000248 1460196 cri.go:89] found id: "63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5"
	I1002 22:16:23.000251 1460196 cri.go:89] found id: "85ae11f7046e7c4088564223c92310bf84ed1291c50dba819b3c63aa0fab5bab"
	I1002 22:16:23.000255 1460196 cri.go:89] found id: "127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582"
	I1002 22:16:23.000258 1460196 cri.go:89] found id: "9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b"
	I1002 22:16:23.000261 1460196 cri.go:89] found id: "b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351"
	I1002 22:16:23.000264 1460196 cri.go:89] found id: "ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c"
	I1002 22:16:23.000270 1460196 cri.go:89] found id: "e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	I1002 22:16:23.000274 1460196 cri.go:89] found id: "9c9d715891fb1fe0c652d51c4da130ea0afd5634cc063f2c6ba51a847dbbf57f"
	I1002 22:16:23.000277 1460196 cri.go:89] found id: ""
	I1002 22:16:23.000329 1460196 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:16:23.022532 1460196 retry.go:31] will retry after 222.190916ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:16:23Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:16:23.244953 1460196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:16:23.258350 1460196 pause.go:51] kubelet running: false
	I1002 22:16:23.258415 1460196 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:16:23.422864 1460196 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:16:23.422989 1460196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:16:23.498333 1460196 cri.go:89] found id: "262c0c70277ce895db2bcada5bdf1a9907c5b11a07877c05a26b7ca693ce2690"
	I1002 22:16:23.498408 1460196 cri.go:89] found id: "4fd4342de784742d74c842c78faaa45d24380c2b83bdf44da664a2c16a2289d7"
	I1002 22:16:23.498420 1460196 cri.go:89] found id: "084caaaf7525a0c67194fc9d1407cd6fe2a876234d337900e84141524d04d741"
	I1002 22:16:23.498426 1460196 cri.go:89] found id: "63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5"
	I1002 22:16:23.498430 1460196 cri.go:89] found id: "85ae11f7046e7c4088564223c92310bf84ed1291c50dba819b3c63aa0fab5bab"
	I1002 22:16:23.498433 1460196 cri.go:89] found id: "127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582"
	I1002 22:16:23.498437 1460196 cri.go:89] found id: "9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b"
	I1002 22:16:23.498441 1460196 cri.go:89] found id: "b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351"
	I1002 22:16:23.498445 1460196 cri.go:89] found id: "ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c"
	I1002 22:16:23.498453 1460196 cri.go:89] found id: "e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	I1002 22:16:23.498460 1460196 cri.go:89] found id: "9c9d715891fb1fe0c652d51c4da130ea0afd5634cc063f2c6ba51a847dbbf57f"
	I1002 22:16:23.498463 1460196 cri.go:89] found id: ""
	I1002 22:16:23.498517 1460196 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:16:23.510429 1460196 retry.go:31] will retry after 303.652009ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:16:23Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:16:23.815015 1460196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:16:23.828911 1460196 pause.go:51] kubelet running: false
	I1002 22:16:23.828981 1460196 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:16:24.012041 1460196 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:16:24.012125 1460196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:16:24.089051 1460196 cri.go:89] found id: "262c0c70277ce895db2bcada5bdf1a9907c5b11a07877c05a26b7ca693ce2690"
	I1002 22:16:24.089094 1460196 cri.go:89] found id: "4fd4342de784742d74c842c78faaa45d24380c2b83bdf44da664a2c16a2289d7"
	I1002 22:16:24.089101 1460196 cri.go:89] found id: "084caaaf7525a0c67194fc9d1407cd6fe2a876234d337900e84141524d04d741"
	I1002 22:16:24.089105 1460196 cri.go:89] found id: "63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5"
	I1002 22:16:24.089109 1460196 cri.go:89] found id: "85ae11f7046e7c4088564223c92310bf84ed1291c50dba819b3c63aa0fab5bab"
	I1002 22:16:24.089113 1460196 cri.go:89] found id: "127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582"
	I1002 22:16:24.089117 1460196 cri.go:89] found id: "9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b"
	I1002 22:16:24.089120 1460196 cri.go:89] found id: "b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351"
	I1002 22:16:24.089123 1460196 cri.go:89] found id: "ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c"
	I1002 22:16:24.089133 1460196 cri.go:89] found id: "e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	I1002 22:16:24.089138 1460196 cri.go:89] found id: "9c9d715891fb1fe0c652d51c4da130ea0afd5634cc063f2c6ba51a847dbbf57f"
	I1002 22:16:24.089142 1460196 cri.go:89] found id: ""
	I1002 22:16:24.089203 1460196 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:16:24.104784 1460196 out.go:203] 
	W1002 22:16:24.108110 1460196 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:16:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:16:24.108134 1460196 out.go:285] * 
	W1002 22:16:24.117856 1460196 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:16:24.121192 1460196 out.go:203] 

                                                
                                                
** /stderr **
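The root cause is visible in the stderr above: the pause path shells out to "sudo runc list -f json", and runc exits 1 because its state directory /run/runc does not exist on the node. A minimal by-hand reproduction, assuming the profile is still up (a sketch of the same probe, not part of the test run):

    # Re-run the exact call the pause path drove over SSH:
    minikube -p old-k8s-version-173127 ssh -- sudo runc list -f json
    # The error names the missing state root; confirm it directly:
    minikube -p old-k8s-version-173127 ssh -- ls -ld /run/runc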
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-173127 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-173127
helpers_test.go:243: (dbg) docker inspect old-k8s-version-173127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	        "Created": "2025-10-02T22:13:49.766969826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1456785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:15:09.999386237Z",
	            "FinishedAt": "2025-10-02T22:15:09.222110959Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hosts",
	        "LogPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481-json.log",
	        "Name": "/old-k8s-version-173127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-173127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-173127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	                "LowerDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-173127",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-173127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-173127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a06c339277a808948508cef3e15a092d700f95e921430784789696309d1a0c0",
	            "SandboxKey": "/var/run/docker/netns/0a06c339277a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34562"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34564"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-173127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:22:80:54:1c:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6008a0e16210e1fdcf0e30a954f2bad61c0505195953a96ceceb44b75081115d",
	                    "EndpointID": "ce952d0b0e1a3a108d1d82da1dae3ec14482561c2cacf1372d8adc5a37c1aa6b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-173127",
	                        "a2aece711092"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
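The inspect dump above is long; the handful of fields the post-mortem actually leans on (container state plus the published SSH and API-server ports) can be pulled with a format template. This mirrors the Ports template minikube itself runs elsewhere in this log and is only a convenience sketch:

    docker inspect old-k8s-version-173127 --format \
      'status={{.State.Status}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} api={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'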
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127: exit status 2 (408.375967ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25
E1002 22:16:25.853212 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25: (1.685426147s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo containerd config dump                                                                                                                                                                                                  │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo crio config                                                                                                                                                                                                             │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                                                                                                                                                              │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ delete  │ -p force-systemd-flag-292135                                                                                                                                                                                                                  │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:15:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
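	Given the [IWEF]mmdd hh:mm:ss.uuuuuu line format documented above, warnings and errors can be sifted out of a saved log with one expression (illustrative only; logs.txt is whatever file the log was saved to):

	    grep -E '^[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{6}' logs.txt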
	I1002 22:15:09.732996 1456658 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:15:09.733150 1456658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:15:09.733160 1456658 out.go:374] Setting ErrFile to fd 2...
	I1002 22:15:09.733166 1456658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:15:09.733418 1456658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:15:09.733796 1456658 out.go:368] Setting JSON to false
	I1002 22:15:09.734695 1456658 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25035,"bootTime":1759418275,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:15:09.734769 1456658 start.go:140] virtualization:  
	I1002 22:15:09.737621 1456658 out.go:179] * [old-k8s-version-173127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:15:09.741518 1456658 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:15:09.741629 1456658 notify.go:220] Checking for updates...
	I1002 22:15:09.747512 1456658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:15:09.750470 1456658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:09.753420 1456658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:15:09.756288 1456658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:15:09.759163 1456658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:15:09.762787 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:09.766395 1456658 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 22:15:09.769218 1456658 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:15:09.790324 1456658 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:15:09.790488 1456658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:15:09.852562 1456658 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:15:09.843037902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:15:09.852671 1456658 docker.go:318] overlay module found
	I1002 22:15:09.855879 1456658 out.go:179] * Using the docker driver based on existing profile
	I1002 22:15:09.858759 1456658 start.go:304] selected driver: docker
	I1002 22:15:09.858799 1456658 start.go:924] validating driver "docker" against &{Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:09.858901 1456658 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:15:09.859657 1456658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:15:09.914697 1456658 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:15:09.903897057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:15:09.915046 1456658 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:15:09.915083 1456658 cni.go:84] Creating CNI manager for ""
	I1002 22:15:09.915146 1456658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:09.915191 1456658 start.go:348] cluster config:
	{Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:09.918497 1456658 out.go:179] * Starting "old-k8s-version-173127" primary control-plane node in "old-k8s-version-173127" cluster
	I1002 22:15:09.921444 1456658 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:15:09.924417 1456658 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:15:09.927316 1456658 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 22:15:09.927382 1456658 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 22:15:09.927413 1456658 cache.go:58] Caching tarball of preloaded images
	I1002 22:15:09.927413 1456658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:15:09.927500 1456658 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:15:09.927510 1456658 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 22:15:09.927643 1456658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/config.json ...
	I1002 22:15:09.947751 1456658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:15:09.947770 1456658 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:15:09.947796 1456658 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:15:09.947826 1456658 start.go:360] acquireMachinesLock for old-k8s-version-173127: {Name:mk8e3605aaf356e5fa6d09b06d4a1c1e3fe0450d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:15:09.947886 1456658 start.go:364] duration metric: took 41.747µs to acquireMachinesLock for "old-k8s-version-173127"
	I1002 22:15:09.947909 1456658 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:15:09.947914 1456658 fix.go:54] fixHost starting: 
	I1002 22:15:09.948179 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:09.965092 1456658 fix.go:112] recreateIfNeeded on old-k8s-version-173127: state=Stopped err=<nil>
	W1002 22:15:09.965120 1456658 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:15:06.878636 1455463 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-230628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.34406054s)
	I1002 22:15:06.878669 1455463 kic.go:203] duration metric: took 4.344208516s to extract preloaded images to volume ...
	W1002 22:15:06.878812 1455463 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:15:06.878933 1455463 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:15:06.929969 1455463 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-230628 --name default-k8s-diff-port-230628 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-230628 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-230628 --network default-k8s-diff-port-230628 --ip 192.168.76.2 --volume default-k8s-diff-port-230628:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:15:07.247574 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Running}}
	I1002 22:15:07.272126 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.295083 1455463 cli_runner.go:164] Run: docker exec default-k8s-diff-port-230628 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:15:07.349052 1455463 oci.go:144] the created container "default-k8s-diff-port-230628" has a running status.
	I1002 22:15:07.349093 1455463 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa...
	I1002 22:15:07.607788 1455463 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:15:07.636776 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.656551 1455463 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:15:07.656570 1455463 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-230628 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:15:07.717251 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.747629 1455463 machine.go:93] provisionDockerMachine start ...
	I1002 22:15:07.747730 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:07.769412 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:07.769739 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:07.769749 1455463 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:15:07.770492 1455463 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:15:10.925909 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:15:10.925932 1455463 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-230628"
	I1002 22:15:10.925997 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:10.943867 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:10.944194 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:10.944211 1455463 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-230628 && echo "default-k8s-diff-port-230628" | sudo tee /etc/hostname
	I1002 22:15:11.088301 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:15:11.088381 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:11.106567 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:11.106886 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:11.106909 1455463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-230628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-230628/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-230628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:15:11.242963 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:15:11.243039 1455463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:15:11.243074 1455463 ubuntu.go:190] setting up certificates
	I1002 22:15:11.243129 1455463 provision.go:84] configureAuth start
	I1002 22:15:11.243228 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:11.260202 1455463 provision.go:143] copyHostCerts
	I1002 22:15:11.260273 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:15:11.260293 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:15:11.260371 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:15:11.260472 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:15:11.260477 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:15:11.260503 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:15:11.260565 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:15:11.260570 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:15:11.260593 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:15:11.260652 1455463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-230628 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-230628 localhost minikube]
	I1002 22:15:12.068250 1455463 provision.go:177] copyRemoteCerts
	I1002 22:15:12.068318 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:15:12.068375 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.086255 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.182260 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:15:12.200488 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:15:12.218985 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
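	The three scp calls above install the CA cert and the freshly generated server keypair under /etc/docker on the node. Whether the server cert carries the SANs requested at generation time (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube) can be checked in place; a hedged one-liner, assuming openssl is available in the node image:

	    minikube -p default-k8s-diff-port-230628 ssh -- \
	      sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'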
	I1002 22:15:12.237061 1455463 provision.go:87] duration metric: took 993.889939ms to configureAuth
	I1002 22:15:12.237129 1455463 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:15:12.237344 1455463 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:15:12.237458 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.254167 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:12.254483 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:12.254502 1455463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:15:12.497914 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:15:12.497940 1455463 machine.go:96] duration metric: took 4.750290631s to provisionDockerMachine
	I1002 22:15:12.497951 1455463 client.go:171] duration metric: took 10.629091398s to LocalClient.Create
	I1002 22:15:12.497966 1455463 start.go:167] duration metric: took 10.62916374s to libmachine.API.Create "default-k8s-diff-port-230628"
	I1002 22:15:12.497975 1455463 start.go:293] postStartSetup for "default-k8s-diff-port-230628" (driver="docker")
	I1002 22:15:12.497989 1455463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:15:12.498084 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:15:12.498137 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.515393 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.610343 1455463 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:15:12.613767 1455463 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:15:12.613795 1455463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:15:12.613815 1455463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:15:12.613872 1455463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:15:12.613952 1455463 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:15:12.614090 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:15:12.621929 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:12.640610 1455463 start.go:296] duration metric: took 142.615794ms for postStartSetup
	I1002 22:15:12.641001 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:12.659652 1455463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:15:12.659959 1455463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:15:12.660012 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.680558 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.775490 1455463 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
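	The two df probes bracketing this step feed minikube's disk-space check: the first reads the Use% column for /var, the second the gigabytes available. Run by hand inside the node they look like this (output values are illustrative):

	    df -h /var | awk 'NR==2{print $5}'    # Use%  on /var, e.g. 23%
	    df -BG /var | awk 'NR==2{print $4}'   # Avail on /var, e.g. 180G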
	I1002 22:15:12.780424 1455463 start.go:128] duration metric: took 10.915510732s to createHost
	I1002 22:15:12.780450 1455463 start.go:83] releasing machines lock for "default-k8s-diff-port-230628", held for 10.915642553s
	I1002 22:15:12.780526 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:12.797414 1455463 ssh_runner.go:195] Run: cat /version.json
	I1002 22:15:12.797430 1455463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:15:12.797474 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.797496 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.822400 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.825524 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:13.009621 1455463 ssh_runner.go:195] Run: systemctl --version
	I1002 22:15:13.016613 1455463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:15:13.055252 1455463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:15:13.059824 1455463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:15:13.059900 1455463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:15:13.090824 1455463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
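For readability: the find invocation above appears in the log with its shell quoting already stripped. A hedged reconstruction of what runs on the node (the parentheses and globs must be escaped or quoted for the shell; behavior matches the logged command):

    # Renames any bridge/podman CNI configs so cri-o stops loading them;
    # kindnet is used for pod networking instead.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;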
	I1002 22:15:13.090846 1455463 start.go:495] detecting cgroup driver to use...
	I1002 22:15:13.090879 1455463 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:15:13.090938 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:15:13.110298 1455463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:15:13.123531 1455463 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:15:13.123627 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:15:13.145259 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:15:13.164304 1455463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:15:13.279567 1455463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:15:13.432888 1455463 docker.go:234] disabling docker service ...
	I1002 22:15:13.432977 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:15:13.455287 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:15:13.474860 1455463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:15:13.625471 1455463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:15:13.786050 1455463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:15:13.799743 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:15:13.814598 1455463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:15:13.814665 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.825426 1455463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:15:13.825496 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.850635 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.866532 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.878588 1455463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:15:13.892067 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.901540 1455463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.917613 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.928607 1455463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:15:13.937474 1455463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:15:13.946013 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:14.098936 1455463 ssh_runner.go:195] Run: sudo systemctl restart crio
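The sed edits above converge the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf on a handful of settings before the daemon-reload and restart. A hedged spot-check of the result, with the expected values taken from the commands in the log:

    # After the restart, the drop-in should carry (values from the log):
    #   pause_image    = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup  = "pod"
    #   default_sysctls including "net.ipv4.ip_unprivileged_port_start=0"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf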
	I1002 22:15:14.269991 1455463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:15:14.270141 1455463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:15:14.275092 1455463 start.go:563] Will wait 60s for crictl version
	I1002 22:15:14.275151 1455463 ssh_runner.go:195] Run: which crictl
	I1002 22:15:14.279338 1455463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:15:14.304676 1455463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:15:14.304759 1455463 ssh_runner.go:195] Run: crio --version
	I1002 22:15:14.344170 1455463 ssh_runner.go:195] Run: crio --version
	I1002 22:15:14.384479 1455463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:15:09.968429 1456658 out.go:252] * Restarting existing docker container for "old-k8s-version-173127" ...
	I1002 22:15:09.968523 1456658 cli_runner.go:164] Run: docker start old-k8s-version-173127
	I1002 22:15:10.246460 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:10.272716 1456658 kic.go:430] container "old-k8s-version-173127" state is running.
	I1002 22:15:10.273106 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:10.299073 1456658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/config.json ...
	I1002 22:15:10.299329 1456658 machine.go:93] provisionDockerMachine start ...
	I1002 22:15:10.299407 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:10.320652 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:10.320990 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:10.321006 1456658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:15:10.321620 1456658 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55830->127.0.0.1:34561: read: connection reset by peer
	I1002 22:15:13.469932 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-173127
	
	I1002 22:15:13.469958 1456658 ubuntu.go:182] provisioning hostname "old-k8s-version-173127"
	I1002 22:15:13.470142 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:13.491674 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:13.491994 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:13.492083 1456658 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-173127 && echo "old-k8s-version-173127" | sudo tee /etc/hostname
	I1002 22:15:13.648844 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-173127
	
	I1002 22:15:13.648964 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:13.672044 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:13.672358 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:13.672381 1456658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-173127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-173127/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-173127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:15:13.827255 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:15:13.827283 1456658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:15:13.827339 1456658 ubuntu.go:190] setting up certificates
	I1002 22:15:13.827366 1456658 provision.go:84] configureAuth start
	I1002 22:15:13.827448 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:13.844962 1456658 provision.go:143] copyHostCerts
	I1002 22:15:13.845024 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:15:13.845048 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:15:13.845114 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:15:13.845228 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:15:13.845239 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:15:13.845262 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:15:13.845330 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:15:13.845341 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:15:13.845361 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:15:13.845422 1456658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-173127 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-173127]
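provision.go generates the machine's server certificate in-process, signed by the local minikube CA with the SANs listed above. A hedged openssl equivalent using the file names from the log (the key size and validity period are assumptions, not values from the log):

    # Sketch only; minikube does this in Go, not via openssl.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj '/O=jenkins.old-k8s-version-173127'
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-173127')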
	I1002 22:15:14.419429 1456658 provision.go:177] copyRemoteCerts
	I1002 22:15:14.419481 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:15:14.419531 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:14.437746 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:14.538915 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:15:14.562194 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 22:15:14.582456 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:15:14.603161 1456658 provision.go:87] duration metric: took 775.779832ms to configureAuth
	I1002 22:15:14.603188 1456658 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:15:14.603388 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:14.603509 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:14.621220 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:14.621533 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:14.621558 1456658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:15:14.387233 1455463 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-230628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:15:14.414878 1455463 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:15:14.419232 1455463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
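The /etc/hosts rewrite above is easier to read with the escaping undone: drop any stale host.minikube.internal entry, append the fresh one, then copy the temp file into place. The $'\t' makes grep anchor on a literal tab before the hostname field:

    # Unescaped form of the logged one-liner.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts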
	I1002 22:15:14.433076 1455463 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:15:14.433193 1455463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:15:14.433245 1455463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:14.480592 1455463 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:14.480613 1455463 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:15:14.480675 1455463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:14.507148 1455463 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:14.507169 1455463 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:15:14.507176 1455463 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 22:15:14.507273 1455463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-230628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
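In the kubelet unit override above, the bare "ExecStart=" line is significant: systemd appends to list-valued settings across drop-ins, so an empty assignment clears the distro's ExecStart before minikube's full command line replaces it. The merged result can be inspected on the node:

    # Shows the unit plus every drop-in, in the order systemd applies them.
    systemctl cat kubelet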
	I1002 22:15:14.507351 1455463 ssh_runner.go:195] Run: crio config
	I1002 22:15:14.570454 1455463 cni.go:84] Creating CNI manager for ""
	I1002 22:15:14.570523 1455463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:14.570568 1455463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:15:14.570613 1455463 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-230628 NodeName:default-k8s-diff-port-230628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:15:14.570803 1455463 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-230628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
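That closes the generated kubeadm.yaml: four documents in one file (InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration). One hedged way to sanity-check the rendered file on the node before init, using the binary path from the log ("kubeadm config validate" is available in recent kubeadm releases):

    # Validates the multi-document config without starting anything.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml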
	
	I1002 22:15:14.570905 1455463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:15:14.579516 1455463 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:15:14.579645 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:15:14.588608 1455463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 22:15:14.604213 1455463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:15:14.623396 1455463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 22:15:14.643290 1455463 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:15:14.647832 1455463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:15:14.663268 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:14.801476 1455463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:14.818655 1455463 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628 for IP: 192.168.76.2
	I1002 22:15:14.818677 1455463 certs.go:195] generating shared ca certs ...
	I1002 22:15:14.818694 1455463 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:14.818828 1455463 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:15:14.818875 1455463 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:15:14.818886 1455463 certs.go:257] generating profile certs ...
	I1002 22:15:14.818939 1455463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key
	I1002 22:15:14.818973 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt with IP's: []
	I1002 22:15:15.049422 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt ...
	I1002 22:15:15.049460 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: {Name:mkcd6a24c9ed73d5db5aef11a5b181c8bdb7fff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.049725 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key ...
	I1002 22:15:15.049738 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key: {Name:mk69e8d972e013fd5f5d9119b3148edc028c6525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.049859 1455463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595
	I1002 22:15:15.049874 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:15:15.444096 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 ...
	I1002 22:15:15.444130 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595: {Name:mkf57e1dd446764561fa06e2b021ffad070e7caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.444358 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595 ...
	I1002 22:15:15.444375 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595: {Name:mka3a8ca78d14dabfb8b92a83293d5011419d567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.444505 1455463 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt
	I1002 22:15:15.444620 1455463 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key
	I1002 22:15:15.444706 1455463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key
	I1002 22:15:15.444739 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt with IP's: []
	I1002 22:15:15.854660 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt ...
	I1002 22:15:15.854716 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt: {Name:mkbf49ab5c3ae2be2607144f477252d578f89572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.854940 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key ...
	I1002 22:15:15.854981 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key: {Name:mk6dd4edfb6b77de253c254c37adf457d0300324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.855221 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:15:15.855293 1455463 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:15:15.855331 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:15:15.855379 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:15:15.855432 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:15:15.855479 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:15:15.855568 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:15.856166 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:15:15.873716 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:15:15.891221 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:15:15.908664 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:15:15.930003 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 22:15:15.951582 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:15:15.980748 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:15:16.006070 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:15:16.027432 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:15:16.047890 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:15:16.069043 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:15:16.090342 1455463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:15:16.105376 1455463 ssh_runner.go:195] Run: openssl version
	I1002 22:15:16.112514 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:15:16.121282 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.125084 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.125202 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.185914 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:15:16.196369 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:15:16.208967 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.213112 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.213208 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.258221 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:15:16.266978 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:15:16.275632 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.279693 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.279803 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.340819 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
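The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0, which is why minikubeCA.pem ends up behind the b5213941.0 link. Sketched for one certificate:

    # Equivalent of one hash/link pair from the log.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"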
	I1002 22:15:16.349418 1455463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:15:16.353975 1455463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:15:16.354080 1455463 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:16.354247 1455463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:15:16.354345 1455463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:15:16.406601 1455463 cri.go:89] found id: ""
	I1002 22:15:16.406757 1455463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:15:16.423876 1455463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:15:16.438067 1455463 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:15:16.438182 1455463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:15:16.466543 1455463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:15:16.466620 1455463 kubeadm.go:157] found existing configuration files:
	
	I1002 22:15:16.466711 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 22:15:16.477683 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:15:16.477816 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:15:16.492575 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 22:15:16.503647 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:15:16.503710 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:15:16.511975 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 22:15:16.521853 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:15:16.521916 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:15:16.530398 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 22:15:16.540153 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:15:16.540216 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:15:16.548606 1455463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:15:16.596774 1455463 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:15:16.599265 1455463 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:15:16.646194 1455463 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:15:16.646272 1455463 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:15:16.646313 1455463 kubeadm.go:318] OS: Linux
	I1002 22:15:16.646366 1455463 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:15:16.646421 1455463 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:15:16.646474 1455463 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:15:16.646528 1455463 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:15:16.646582 1455463 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:15:16.646635 1455463 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:15:16.646686 1455463 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:15:16.646740 1455463 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:15:16.646798 1455463 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:15:16.733983 1455463 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:15:16.734149 1455463 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:15:16.734254 1455463 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:15:16.746959 1455463 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:15:15.001482 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:15:15.001516 1456658 machine.go:96] duration metric: took 4.702169103s to provisionDockerMachine
	I1002 22:15:15.001529 1456658 start.go:293] postStartSetup for "old-k8s-version-173127" (driver="docker")
	I1002 22:15:15.001541 1456658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:15:15.001608 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:15:15.001696 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.043856 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.155245 1456658 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:15:15.159168 1456658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:15:15.159196 1456658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:15:15.159215 1456658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:15:15.159293 1456658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:15:15.159377 1456658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:15:15.159524 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:15:15.168267 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:15.190661 1456658 start.go:296] duration metric: took 189.115006ms for postStartSetup
	I1002 22:15:15.190759 1456658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:15:15.190805 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.211353 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.304817 1456658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:15:15.310595 1456658 fix.go:56] duration metric: took 5.362672893s for fixHost
	I1002 22:15:15.310626 1456658 start.go:83] releasing machines lock for "old-k8s-version-173127", held for 5.362729663s
	I1002 22:15:15.310710 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:15.328902 1456658 ssh_runner.go:195] Run: cat /version.json
	I1002 22:15:15.328957 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.329205 1456658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:15:15.329266 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.355700 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.380990 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.573717 1456658 ssh_runner.go:195] Run: systemctl --version
	I1002 22:15:15.580422 1456658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:15:15.628263 1456658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:15:15.633019 1456658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:15:15.633084 1456658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:15:15.641351 1456658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:15:15.641374 1456658 start.go:495] detecting cgroup driver to use...
	I1002 22:15:15.641405 1456658 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:15:15.641454 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:15:15.667836 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:15:15.715390 1456658 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:15:15.715452 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:15:15.734266 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:15:15.760995 1456658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:15:15.913506 1456658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:15:16.063210 1456658 docker.go:234] disabling docker service ...
	I1002 22:15:16.063271 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:15:16.081226 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:15:16.096466 1456658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:15:16.243957 1456658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:15:16.398137 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:15:16.414387 1456658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:15:16.433693 1456658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 22:15:16.433752 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.448711 1456658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:15:16.448779 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.462090 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.479047 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.489883 1456658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:15:16.501682 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.512488 1456658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.522780 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.532621 1456658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:15:16.542899 1456658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:15:16.551698 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:16.696543 1456658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:15:16.865149 1456658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:15:16.865215 1456658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:15:16.869292 1456658 start.go:563] Will wait 60s for crictl version
	I1002 22:15:16.869351 1456658 ssh_runner.go:195] Run: which crictl
	I1002 22:15:16.873728 1456658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:15:16.917322 1456658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:15:16.917489 1456658 ssh_runner.go:195] Run: crio --version
	I1002 22:15:16.963052 1456658 ssh_runner.go:195] Run: crio --version
	I1002 22:15:17.016048 1456658 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 22:15:17.019124 1456658 cli_runner.go:164] Run: docker network inspect old-k8s-version-173127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:15:17.043687 1456658 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:15:17.047984 1456658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:15:17.059872 1456658 kubeadm.go:883] updating cluster {Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:15:17.059989 1456658 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 22:15:17.060052 1456658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:17.094405 1456658 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:17.094429 1456658 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:15:17.094486 1456658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:17.127042 1456658 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:17.127067 1456658 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:15:17.127075 1456658 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1002 22:15:17.127200 1456658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-173127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
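The double ExecStart= in the rendered unit above is the standard systemd drop-in idiom: the empty ExecStart= first clears the command inherited from the packaged kubelet.service, then the second line installs minikube's own invocation with the bootstrap kubeconfig, hostname override and node IP. Once the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (a few lines below), the merged result can be inspected with:

	systemctl cat kubelet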
	I1002 22:15:17.127286 1456658 ssh_runner.go:195] Run: crio config
	I1002 22:15:17.215630 1456658 cni.go:84] Creating CNI manager for ""
	I1002 22:15:17.215654 1456658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:17.215673 1456658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:15:17.215695 1456658 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-173127 NodeName:old-k8s-version-173127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:15:17.215844 1456658 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-173127"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:15:17.215915 1456658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 22:15:17.227126 1456658 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:15:17.227289 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:15:17.239789 1456658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 22:15:17.253773 1456658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:15:17.266833 1456658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
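The kubeadm.yaml.new staged above stacks four documents: InitConfiguration (node registration, advertise address), ClusterConfiguration (API server SANs, admission plugins, etcd layout), KubeletConfiguration and KubeProxyConfiguration. Note the KubeletConfiguration deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%) so test pods are not evicted on tight CI disks. On recent kubeadm releases the staged file can be sanity-checked before it is applied (assuming the target kubeadm version ships the validate subcommand):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new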
	I1002 22:15:17.279855 1456658 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:15:17.284090 1456658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:15:17.293848 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:17.441102 1456658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:17.457625 1456658 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127 for IP: 192.168.85.2
	I1002 22:15:17.457698 1456658 certs.go:195] generating shared ca certs ...
	I1002 22:15:17.457729 1456658 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:17.457910 1456658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:15:17.458009 1456658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:15:17.458051 1456658 certs.go:257] generating profile certs ...
	I1002 22:15:17.458187 1456658 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.key
	I1002 22:15:17.458310 1456658 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.key.3d23cd3a
	I1002 22:15:17.458387 1456658 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.key
	I1002 22:15:17.458555 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:15:17.458622 1456658 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:15:17.458651 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:15:17.458711 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:15:17.458758 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:15:17.458812 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:15:17.458899 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:17.459742 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:15:17.515829 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:15:17.567290 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:15:17.632093 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:15:17.684686 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 22:15:17.735109 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:15:17.773372 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:15:17.797453 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:15:17.821531 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:15:17.841697 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:15:17.862435 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:15:17.882633 1456658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:15:17.907254 1456658 ssh_runner.go:195] Run: openssl version
	I1002 22:15:17.914345 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:15:17.923702 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.928435 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.928500 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.982813 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:15:17.991614 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:15:18.003296 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.009381 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.009453 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.057989 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:15:18.067698 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:15:18.077514 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.082230 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.082309 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.126055 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
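The test/ln sequence above reproduces what update-ca-certificates does: OpenSSL locates trusted CAs in /etc/ssl/certs via symlinks named after each certificate's subject hash, so every PEM gets a companion link <hash>.0. The hash printed by the openssl x509 -hash run is exactly the link name checked here; for the minikube CA in this log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0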
	I1002 22:15:18.135258 1456658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:15:18.139930 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:15:18.181618 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:15:18.266805 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:15:18.371213 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:15:18.445762 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:15:18.691987 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
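Each of the openssl x509 -checkend 86400 runs above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube uses that exit code to decide whether a certificate must be regenerated before reusing the cluster. Checked by hand:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another day" || echo "expiring within 24h"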
	I1002 22:15:18.796815 1456658 kubeadm.go:400] StartCluster: {Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:18.796909 1456658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:15:18.796989 1456658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:15:18.962574 1456658 cri.go:89] found id: "127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582"
	I1002 22:15:18.962598 1456658 cri.go:89] found id: "9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b"
	I1002 22:15:18.962603 1456658 cri.go:89] found id: "b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351"
	I1002 22:15:18.962609 1456658 cri.go:89] found id: "ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c"
	I1002 22:15:18.962612 1456658 cri.go:89] found id: ""
	I1002 22:15:18.962661 1456658 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:15:19.011496 1456658 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:15:19Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:15:19.011615 1456658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:15:19.036559 1456658 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:15:19.036579 1456658 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:15:19.036639 1456658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:15:19.068243 1456658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:15:19.068647 1456658 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-173127" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:19.068775 1456658 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-173127" cluster setting kubeconfig missing "old-k8s-version-173127" context setting]
	I1002 22:15:19.069067 1456658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.070540 1456658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:15:19.092524 1456658 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:15:19.092557 1456658 kubeadm.go:601] duration metric: took 55.972415ms to restartPrimaryControlPlane
	I1002 22:15:19.092567 1456658 kubeadm.go:402] duration metric: took 295.761664ms to StartCluster
	I1002 22:15:19.092582 1456658 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.092644 1456658 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:19.093259 1456658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.093478 1456658 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:15:19.093775 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:19.093822 1456658 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:15:19.093889 1456658 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-173127"
	I1002 22:15:19.093910 1456658 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-173127"
	W1002 22:15:19.093983 1456658 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:15:19.094008 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.093939 1456658 addons.go:69] Setting dashboard=true in profile "old-k8s-version-173127"
	I1002 22:15:19.094056 1456658 addons.go:238] Setting addon dashboard=true in "old-k8s-version-173127"
	W1002 22:15:19.094062 1456658 addons.go:247] addon dashboard should already be in state true
	I1002 22:15:19.094086 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.094681 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.093947 1456658 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-173127"
	I1002 22:15:19.095168 1456658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-173127"
	I1002 22:15:19.095408 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.095799 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.099500 1456658 out.go:179] * Verifying Kubernetes components...
	I1002 22:15:19.106123 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:19.153626 1456658 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:15:19.153805 1456658 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:15:19.154856 1456658 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-173127"
	W1002 22:15:19.154876 1456658 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:15:19.154899 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.155312 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.157571 1456658 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:19.157621 1456658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:15:19.157686 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.160443 1456658 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:15:19.163386 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:15:19.163408 1456658 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:15:19.163483 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.198187 1456658 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:19.198208 1456658 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:15:19.198271 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.222171 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.228269 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.236013 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.535388 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:15:19.535412 1456658 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:15:19.603764 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:19.620433 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:19.652063 1456658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:16.750286 1455463 out.go:252]   - Generating certificates and keys ...
	I1002 22:15:16.750388 1455463 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:15:16.750467 1455463 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:15:16.925038 1455463 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:15:18.243893 1455463 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:15:18.407058 1455463 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:15:18.978675 1455463 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:15:20.385527 1455463 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:15:20.385861 1455463 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-230628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:15:20.678554 1455463 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:15:20.679137 1455463 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-230628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:15:21.116855 1455463 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:15:21.544475 1455463 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:15:19.771580 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:15:19.771606 1456658 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:15:19.928346 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:15:19.928373 1456658 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:15:20.056457 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:15:20.056527 1456658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:15:20.161666 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:15:20.161737 1456658 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:15:20.211551 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:15:20.211623 1456658 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:15:20.252349 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:15:20.252420 1456658 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:15:20.288346 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:15:20.288414 1456658 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:15:20.319345 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:15:20.319409 1456658 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:15:20.363121 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:15:22.347270 1455463 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:15:22.347820 1455463 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:15:22.755674 1455463 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:15:23.442143 1455463 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:15:23.652976 1455463 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:15:24.449573 1455463 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:15:25.364671 1455463 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:15:25.365816 1455463 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:15:25.368956 1455463 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:15:25.372471 1455463 out.go:252]   - Booting up control plane ...
	I1002 22:15:25.372585 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:15:25.377080 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:15:25.380676 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:15:25.411081 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:15:25.411193 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:15:25.425436 1455463 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:15:25.425537 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:15:25.425584 1455463 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:15:25.666465 1455463 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:15:25.666590 1455463 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:15:30.716345 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.095858218s)
	I1002 22:15:30.716400 1456658 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.064306851s)
	I1002 22:15:30.716533 1456658 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-173127" to be "Ready" ...
	I1002 22:15:30.717875 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.114068825s)
	I1002 22:15:30.762652 1456658 node_ready.go:49] node "old-k8s-version-173127" is "Ready"
	I1002 22:15:30.762677 1456658 node_ready.go:38] duration metric: took 46.035769ms for node "old-k8s-version-173127" to be "Ready" ...
	I1002 22:15:30.762689 1456658 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:15:30.762747 1456658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:15:31.535052 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.171845805s)
	I1002 22:15:31.535148 1456658 api_server.go:72] duration metric: took 12.44163991s to wait for apiserver process to appear ...
	I1002 22:15:31.535353 1456658 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:15:31.535374 1456658 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:15:31.537883 1456658 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-173127 addons enable metrics-server
	
	I1002 22:15:31.540930 1456658 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 22:15:28.154992 1455463 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501656674s
	I1002 22:15:28.158423 1455463 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:15:28.158735 1455463 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1002 22:15:28.159028 1455463 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:15:28.159968 1455463 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:15:31.544802 1456658 addons.go:514] duration metric: took 12.450979691s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 22:15:31.550933 1456658 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:15:31.552374 1456658 api_server.go:141] control plane version: v1.28.0
	I1002 22:15:31.552395 1456658 api_server.go:131] duration metric: took 17.034298ms to wait for apiserver health ...
	I1002 22:15:31.552404 1456658 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:15:31.556113 1456658 system_pods.go:59] 8 kube-system pods found
	I1002 22:15:31.556206 1456658 system_pods.go:61] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:15:31.556228 1456658 system_pods.go:61] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:15:31.556263 1456658 system_pods.go:61] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:15:31.556289 1456658 system_pods.go:61] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:15:31.556311 1456658 system_pods.go:61] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:15:31.556349 1456658 system_pods.go:61] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:15:31.556376 1456658 system_pods.go:61] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:15:31.556398 1456658 system_pods.go:61] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Running
	I1002 22:15:31.556431 1456658 system_pods.go:74] duration metric: took 4.020425ms to wait for pod list to return data ...
	I1002 22:15:31.556460 1456658 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:15:31.559346 1456658 default_sa.go:45] found service account: "default"
	I1002 22:15:31.559407 1456658 default_sa.go:55] duration metric: took 2.925716ms for default service account to be created ...
	I1002 22:15:31.559447 1456658 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:15:31.565947 1456658 system_pods.go:86] 8 kube-system pods found
	I1002 22:15:31.566021 1456658 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:15:31.566062 1456658 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:15:31.566086 1456658 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:15:31.566107 1456658 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:15:31.566150 1456658 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:15:31.566176 1456658 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:15:31.566196 1456658 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:15:31.566228 1456658 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Running
	I1002 22:15:31.566253 1456658 system_pods.go:126] duration metric: took 6.784848ms to wait for k8s-apps to be running ...
	I1002 22:15:31.566274 1456658 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:15:31.566358 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:15:31.588622 1456658 system_svc.go:56] duration metric: took 22.338188ms WaitForService to wait for kubelet
	I1002 22:15:31.588650 1456658 kubeadm.go:586] duration metric: took 12.495141254s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:15:31.588668 1456658 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:15:31.594359 1456658 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:15:31.594441 1456658 node_conditions.go:123] node cpu capacity is 2
	I1002 22:15:31.594468 1456658 node_conditions.go:105] duration metric: took 5.793489ms to run NodePressure ...
	I1002 22:15:31.594493 1456658 start.go:241] waiting for startup goroutines ...
	I1002 22:15:31.594526 1456658 start.go:246] waiting for cluster config update ...
	I1002 22:15:31.594555 1456658 start.go:255] writing updated cluster config ...
	I1002 22:15:31.594890 1456658 ssh_runner.go:195] Run: rm -f paused
	I1002 22:15:31.599261 1456658 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:15:31.607281 1456658 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:15:33.614942 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:15:32.746490 1455463 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.586025458s
	I1002 22:15:34.810074 1455463 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.649351715s
	I1002 22:15:36.661155 1455463 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.501736021s
	I1002 22:15:36.681682 1455463 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:15:36.698789 1455463 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:15:36.715122 1455463 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:15:36.715624 1455463 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-230628 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:15:36.738680 1455463 kubeadm.go:318] [bootstrap-token] Using token: rz2xxr.tp4rjlg1n4owddq6
	I1002 22:15:36.741583 1455463 out.go:252]   - Configuring RBAC rules ...
	I1002 22:15:36.741715 1455463 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:15:36.749548 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:15:36.767244 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:15:36.773322 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:15:36.778338 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:15:36.785949 1455463 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:15:37.070380 1455463 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:15:37.505203 1455463 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:15:38.069819 1455463 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:15:38.071618 1455463 kubeadm.go:318] 
	I1002 22:15:38.071699 1455463 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:15:38.071705 1455463 kubeadm.go:318] 
	I1002 22:15:38.071786 1455463 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:15:38.071791 1455463 kubeadm.go:318] 
	I1002 22:15:38.071837 1455463 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:15:38.071908 1455463 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:15:38.071961 1455463 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:15:38.071966 1455463 kubeadm.go:318] 
	I1002 22:15:38.072029 1455463 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:15:38.072072 1455463 kubeadm.go:318] 
	I1002 22:15:38.072123 1455463 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:15:38.072127 1455463 kubeadm.go:318] 
	I1002 22:15:38.072183 1455463 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:15:38.072262 1455463 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:15:38.072333 1455463 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:15:38.072338 1455463 kubeadm.go:318] 
	I1002 22:15:38.072438 1455463 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:15:38.072519 1455463 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:15:38.072523 1455463 kubeadm.go:318] 
	I1002 22:15:38.072611 1455463 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token rz2xxr.tp4rjlg1n4owddq6 \
	I1002 22:15:38.072718 1455463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:15:38.072740 1455463 kubeadm.go:318] 	--control-plane 
	I1002 22:15:38.072745 1455463 kubeadm.go:318] 
	I1002 22:15:38.072834 1455463 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:15:38.072839 1455463 kubeadm.go:318] 
	I1002 22:15:38.072924 1455463 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token rz2xxr.tp4rjlg1n4owddq6 \
	I1002 22:15:38.073031 1455463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:15:38.077413 1455463 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:15:38.077657 1455463 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:15:38.077779 1455463 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:15:38.077805 1455463 cni.go:84] Creating CNI manager for ""
	I1002 22:15:38.077814 1455463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:38.081103 1455463 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:15:36.115162 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:38.116012 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:15:38.084222 1455463 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:15:38.089365 1455463 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:15:38.089389 1455463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:15:38.107473 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:15:38.443975 1455463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:15:38.444088 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:38.444200 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-230628 minikube.k8s.io/updated_at=2025_10_02T22_15_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=default-k8s-diff-port-230628 minikube.k8s.io/primary=true
	I1002 22:15:38.604038 1455463 ops.go:34] apiserver oom_adj: -16
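The "apiserver oom_adj: -16" read back here confirms the kubelet started kube-apiserver with a negative OOM-score adjustment, so under memory pressure the kernel's OOM killer prefers ordinary workloads over the control plane. The probe itself is just (using pgrep -n here to pick the newest match, in case several exist):

	cat /proc/$(pgrep -n kube-apiserver)/oom_adj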
	I1002 22:15:38.604160 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:39.104899 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:39.604972 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:40.104297 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:40.604718 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:41.104668 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:41.604673 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:42.105255 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:42.351543 1455463 kubeadm.go:1113] duration metric: took 3.907517019s to wait for elevateKubeSystemPrivileges
	I1002 22:15:42.351582 1455463 kubeadm.go:402] duration metric: took 25.997505097s to StartCluster
	I1002 22:15:42.351600 1455463 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:42.351669 1455463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:42.352836 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:42.353074 1455463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:15:42.353167 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:15:42.353418 1455463 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:15:42.353468 1455463 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:15:42.353543 1455463 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-230628"
	I1002 22:15:42.353556 1455463 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-230628"
	I1002 22:15:42.353586 1455463 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:15:42.354141 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.354493 1455463 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-230628"
	I1002 22:15:42.354514 1455463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-230628"
	I1002 22:15:42.354800 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.356478 1455463 out.go:179] * Verifying Kubernetes components...
	I1002 22:15:42.367009 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:42.395857 1455463 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-230628"
	I1002 22:15:42.395898 1455463 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:15:42.398720 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.406763 1455463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:15:42.409899 1455463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:42.409921 1455463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:15:42.409983 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:42.458101 1455463 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:42.458124 1455463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:15:42.458199 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:42.473455 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:42.504177 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:42.977660 1455463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:43.069700 1455463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:43.130232 1455463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:43.130503 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:15:44.633926 1455463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.503373132s)
	I1002 22:15:44.633958 1455463 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
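The ~1.5s pipeline completed above is the host-record injection: it fetches the coredns ConfigMap, uses sed to insert a hosts plugin block ahead of the forward directive (and a log directive after errors), then feeds the result back through kubectl replace, so in-cluster lookups of host.minikube.internal resolve to the gateway 192.168.76.1. The injected Corefile fragment looks like:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}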
	I1002 22:15:44.635067 1455463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.504760422s)
	I1002 22:15:44.635988 1455463 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:15:44.636417 1455463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.566663708s)
	I1002 22:15:44.639701 1455463 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1002 22:15:40.614122 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:42.617854 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:15:44.643898 1455463 addons.go:514] duration metric: took 2.290407921s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 22:15:45.157399 1455463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-230628" context rescaled to 1 replicas
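	The kapi rescale above pins the coredns deployment back to a single replica. A hand-run sketch of the same operation, outside the test harness:
	
	    kubectl --context default-k8s-diff-port-230628 -n kube-system \
	      scale deployment coredns --replicas=1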
	W1002 22:15:45.116177 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:47.616549 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:46.639665 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:49.141848 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:51.143818 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:50.113642 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:52.114300 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:54.612839 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:53.639377 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:56.139389 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:56.613750 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:58.613901 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:58.139877 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:00.195160 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:00.614601 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:03.114600 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:02.639680 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:05.139702 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:05.613081 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:07.615019 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:16:08.113225 1456658 pod_ready.go:94] pod "coredns-5dd5756b68-78sbd" is "Ready"
	I1002 22:16:08.113257 1456658 pod_ready.go:86] duration metric: took 36.505894744s for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.116927 1456658 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.124639 1456658 pod_ready.go:94] pod "etcd-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.124666 1456658 pod_ready.go:86] duration metric: took 7.714554ms for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.128821 1456658 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.140885 1456658 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.140916 1456658 pod_ready.go:86] duration metric: took 12.067994ms for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.146101 1456658 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.311026 1456658 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.311057 1456658 pod_ready.go:86] duration metric: took 164.927706ms for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.512074 1456658 pod_ready.go:83] waiting for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.911724 1456658 pod_ready.go:94] pod "kube-proxy-86prs" is "Ready"
	I1002 22:16:08.911751 1456658 pod_ready.go:86] duration metric: took 399.651667ms for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.111723 1456658 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.511330 1456658 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-173127" is "Ready"
	I1002 22:16:09.511355 1456658 pod_ready.go:86] duration metric: took 399.554306ms for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.511367 1456658 pod_ready.go:40] duration metric: took 37.912028834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:16:09.571232 1456658 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 22:16:09.574565 1456658 out.go:203] 
	W1002 22:16:09.577504 1456658 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 22:16:09.580525 1456658 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 22:16:09.583501 1456658 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-173127" cluster and "default" namespace by default
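	For the version-skew warning above (host kubectl 1.33.2 vs cluster 1.28.0, minor skew 5), the log's own suggestion works as written: invoke the kubectl that minikube downloads for the cluster's Kubernetes version instead of the host binary, for example:
	
	    minikube -p old-k8s-version-173127 kubectl -- get pods -A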
	W1002 22:16:07.140175 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:09.141142 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:11.639989 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:14.140227 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:16.140374 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:18.638802 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:20.639073 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.790406326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.799196561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.799989359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.815927185Z" level=info msg="Created container e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper" id=8ea47ded-df5c-4dd1-a536-f3b5bec54e7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.816767145Z" level=info msg="Starting container: e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d" id=a59cc8f1-888a-4630-adaa-a4b3d8937508 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.820188623Z" level=info msg="Started container" PID=1639 containerID=e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper id=a59cc8f1-888a-4630-adaa-a4b3d8937508 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2
	Oct 02 22:16:02 old-k8s-version-173127 conmon[1637]: conmon e878da3125cc7504cca8 <ninfo>: container 1639 exited with status 1
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.116506443Z" level=info msg="Removing container: 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.123793271Z" level=info msg="Error loading conmon cgroup of container 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e: cgroup deleted" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.12715569Z" level=info msg="Removed container 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.515177019Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523105395Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523145558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523177328Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526413381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526448687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526470718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529840906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529879084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529902082Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.53378587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.53382358Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.533845331Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.538082359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.538120791Z" level=info msg="Updated default CNI network name to kindnet"
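	The CREATE/WRITE/RENAME events above are CRI-O's config watcher reacting to kindnet rewriting its CNI config atomically (write to a .temp file, then rename over the final name). To see the conflist CRI-O ended up loading, one could ssh into the node; a sketch, assuming the profile is still up:
	
	    minikube -p old-k8s-version-173127 ssh -- \
	      sudo cat /etc/cni/net.d/10-kindnet.conflist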
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e878da3125cc7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   cc46f91e35013       dashboard-metrics-scraper-5f989dc9cf-zm7zk       kubernetes-dashboard
	262c0c70277ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   20e90aec323ec       storage-provisioner                              kube-system
	9c9d715891fb1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   5825d7223aec8       kubernetes-dashboard-8694d4445c-vlglj            kubernetes-dashboard
	4fd4342de7847       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   656cff9e4f77d       coredns-5dd5756b68-78sbd                         kube-system
	973a833ef3a2e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   4f2eddc047af2       busybox                                          default
	084caaaf7525a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   7a4d1b38f17ac       kube-proxy-86prs                                 kube-system
	63792d5054495       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   20e90aec323ec       storage-provisioner                              kube-system
	85ae11f7046e7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   db121931dc350       kindnet-xtlhd                                    kube-system
	127dfbf0ed811       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   8e67e54766172       kube-controller-manager-old-k8s-version-173127   kube-system
	9248a46e17c68       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   2d1a5b594be82       etcd-old-k8s-version-173127                      kube-system
	b311e5165e9b9       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   5876288b79169       kube-scheduler-old-k8s-version-173127            kube-system
	ce52286ca959a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4326f4b4097b9       kube-apiserver-old-k8s-version-173127            kube-system
	
	
	==> coredns [4fd4342de784742d74c842c78faaa45d24380c2b83bdf44da664a2c16a2289d7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53574 - 30820 "HINFO IN 4664996271170584488.1405436629196325381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012015901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
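	The repeated plugin/ready waits above cleared once the API became reachable. A hedged smoke test that name resolution works through this CoreDNS (hypothetical throwaway pod, removed on exit):
	
	    kubectl --context old-k8s-version-173127 run dns-test --rm -it \
	      --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal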
	
	
	==> describe nodes <==
	Name:               old-k8s-version-173127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-173127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=old-k8s-version-173127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_14_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-173127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:16:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-173127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 f592df572e62434f8f004b57bfa02bb2
	  System UUID:                58645267-b6b4-4674-bdc1-8f78d84fc839
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-5dd5756b68-78sbd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 etcd-old-k8s-version-173127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-xtlhd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-old-k8s-version-173127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-old-k8s-version-173127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-86prs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-old-k8s-version-173127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zm7zk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vlglj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s              kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s              kubelet          Node old-k8s-version-173127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s              kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m10s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           119s               node-controller  Node old-k8s-version-173127 event: Registered Node old-k8s-version-173127 in Controller
	  Normal  NodeReady                103s               kubelet          Node old-k8s-version-173127 status is now: NodeReady
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-173127 event: Registered Node old-k8s-version-173127 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b] <==
	{"level":"info","ts":"2025-10-02T22:15:18.822192Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T22:15:18.8222Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T22:15:18.822517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-02T22:15:18.822569Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-02T22:15:18.833052Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T22:15:18.843586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:15:18.84364Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:15:18.845356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:15:18.845379Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:15:18.87724Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T22:15:18.881582Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T22:15:20.566083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.57022Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-173127 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T22:15:20.570315Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:15:20.571268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-02T22:15:20.574085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:15:20.574999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T22:15:20.582645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T22:15:20.582716Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:16:25 up  6:58,  0 user,  load average: 3.25, 2.21, 2.01
	Linux old-k8s-version-173127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85ae11f7046e7c4088564223c92310bf84ed1291c50dba819b3c63aa0fab5bab] <==
	I1002 22:15:28.257537       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:15:28.257796       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:15:28.257935       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:15:28.257946       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:15:28.257957       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:15:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:15:28.514676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:15:28.514820       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:15:28.514859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:15:28.515649       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:15:58.518178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:15:58.518185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:15:58.518364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:15:58.518419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:16:00.015715       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:16:00.015761       1 metrics.go:72] Registering metrics
	I1002 22:16:00.015844       1 controller.go:711] "Syncing nftables rules"
	I1002 22:16:08.514824       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:16:08.514884       1 main.go:301] handling current node
	I1002 22:16:18.514573       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:16:18.514605       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c] <==
	I1002 22:15:27.328167       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:15:27.350682       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 22:15:27.354760       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 22:15:27.354853       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 22:15:27.354976       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:15:27.361371       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 22:15:27.365009       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 22:15:27.369980       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 22:15:27.370676       1 aggregator.go:166] initial CRD sync complete...
	I1002 22:15:27.370731       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 22:15:27.370761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:15:27.370795       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:15:27.375777       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1002 22:15:27.438118       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:15:27.892052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:15:31.269042       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 22:15:31.354991       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 22:15:31.390339       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:15:31.404304       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:15:31.419169       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 22:15:31.487145       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.227.172"}
	I1002 22:15:31.525072       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.175.107"}
	I1002 22:15:41.259430       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 22:15:41.289726       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 22:15:41.341479       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582] <==
	I1002 22:15:41.298877       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	I1002 22:15:41.312255       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-vlglj"
	I1002 22:15:41.312557       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 22:15:41.326388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.398015ms"
	I1002 22:15:41.334488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.40859ms"
	I1002 22:15:41.347113       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1002 22:15:41.362162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.546916ms"
	I1002 22:15:41.362249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.714µs"
	I1002 22:15:41.381684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="262.724µs"
	I1002 22:15:41.388751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.244679ms"
	I1002 22:15:41.389571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.896µs"
	I1002 22:15:41.398246       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 22:15:41.401917       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 22:15:41.798199       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:15:41.798244       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 22:15:41.800770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:15:48.015782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="476.652µs"
	I1002 22:15:49.033313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.655µs"
	I1002 22:15:50.037402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.162µs"
	I1002 22:15:53.075721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.083985ms"
	I1002 22:15:53.075827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.851µs"
	I1002 22:16:04.141305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.553µs"
	I1002 22:16:07.946474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.817166ms"
	I1002 22:16:07.947388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.006µs"
	I1002 22:16:11.637052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.522µs"
	
	
	==> kube-proxy [084caaaf7525a0c67194fc9d1407cd6fe2a876234d337900e84141524d04d741] <==
	I1002 22:15:30.196002       1 server_others.go:69] "Using iptables proxy"
	I1002 22:15:30.243721       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1002 22:15:30.998456       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:15:31.015834       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:15:31.015939       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:15:31.015970       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:15:31.018172       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:15:31.018479       1 server.go:846] "Version info" version="v1.28.0"
	I1002 22:15:31.042119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:15:31.043547       1 config.go:188] "Starting service config controller"
	I1002 22:15:31.051494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:15:31.059113       1 config.go:315] "Starting node config controller"
	I1002 22:15:31.059141       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:15:31.043775       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:15:31.078693       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:15:31.151716       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:15:31.160519       1 shared_informer.go:318] Caches are synced for node config
	I1002 22:15:31.179348       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351] <==
	I1002 22:15:25.088278       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:15:30.446722       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 22:15:30.446760       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:15:30.465521       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 22:15:30.465824       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 22:15:30.465892       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 22:15:30.466292       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 22:15:30.479043       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:15:30.479142       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:15:30.491190       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:15:30.491215       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 22:15:30.574257       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 22:15:30.615234       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:15:30.615370       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427800     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/988e0d52-fae3-445f-bc46-8ed21d729763-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zm7zk\" (UID: \"988e0d52-fae3-445f-bc46-8ed21d729763\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427859     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx5t\" (UniqueName: \"kubernetes.io/projected/988e0d52-fae3-445f-bc46-8ed21d729763-kube-api-access-czx5t\") pod \"dashboard-metrics-scraper-5f989dc9cf-zm7zk\" (UID: \"988e0d52-fae3-445f-bc46-8ed21d729763\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427894     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lblj\" (UniqueName: \"kubernetes.io/projected/02d32051-a965-4fa4-9a6e-e03d13faab7d-kube-api-access-8lblj\") pod \"kubernetes-dashboard-8694d4445c-vlglj\" (UID: \"02d32051-a965-4fa4-9a6e-e03d13faab7d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427922     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/02d32051-a965-4fa4-9a6e-e03d13faab7d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-vlglj\" (UID: \"02d32051-a965-4fa4-9a6e-e03d13faab7d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: W1002 22:15:41.675035     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2 WatchSource:0}: Error finding container cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2: Status 404 returned error can't find the container with id cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: W1002 22:15:41.710876     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17 WatchSource:0}: Error finding container 5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17: Status 404 returned error can't find the container with id 5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17
	Oct 02 22:15:47 old-k8s-version-173127 kubelet[779]: I1002 22:15:47.998379     779 scope.go:117] "RemoveContainer" containerID="6aeee2dc53691b4d67502dc8c2b2f4cf3b42cd174f6bcc05df2a2f4f883bfc00"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: I1002 22:15:49.002816     779 scope.go:117] "RemoveContainer" containerID="6aeee2dc53691b4d67502dc8c2b2f4cf3b42cd174f6bcc05df2a2f4f883bfc00"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: I1002 22:15:49.003758     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: E1002 22:15:49.004104     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:15:50 old-k8s-version-173127 kubelet[779]: I1002 22:15:50.016110     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:50 old-k8s-version-173127 kubelet[779]: E1002 22:15:50.016403     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:15:51 old-k8s-version-173127 kubelet[779]: I1002 22:15:51.621760     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:51 old-k8s-version-173127 kubelet[779]: E1002 22:15:51.622531     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:00 old-k8s-version-173127 kubelet[779]: I1002 22:16:00.107758     779 scope.go:117] "RemoveContainer" containerID="63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5"
	Oct 02 22:16:00 old-k8s-version-173127 kubelet[779]: I1002 22:16:00.332172     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj" podStartSLOduration=8.648447374 podCreationTimestamp="2025-10-02 22:15:41 +0000 UTC" firstStartedPulling="2025-10-02 22:15:41.7236844 +0000 UTC m=+24.258080296" lastFinishedPulling="2025-10-02 22:15:52.407301081 +0000 UTC m=+34.941696969" observedRunningTime="2025-10-02 22:15:53.042844062 +0000 UTC m=+35.577239950" watchObservedRunningTime="2025-10-02 22:16:00.332064047 +0000 UTC m=+42.866459943"
	Oct 02 22:16:02 old-k8s-version-173127 kubelet[779]: I1002 22:16:02.786977     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:16:03 old-k8s-version-173127 kubelet[779]: I1002 22:16:03.115159     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:16:04 old-k8s-version-173127 kubelet[779]: I1002 22:16:04.119637     779 scope.go:117] "RemoveContainer" containerID="e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	Oct 02 22:16:04 old-k8s-version-173127 kubelet[779]: E1002 22:16:04.120403     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:11 old-k8s-version-173127 kubelet[779]: I1002 22:16:11.622177     779 scope.go:117] "RemoveContainer" containerID="e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	Oct 02 22:16:11 old-k8s-version-173127 kubelet[779]: E1002 22:16:11.622954     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
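	The kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff (back-off 10s, then 20s) before the kubelet itself is stopped. A standard way to pull the crashed container's output from its previous attempt:
	
	    kubectl --context old-k8s-version-173127 -n kubernetes-dashboard \
	      logs dashboard-metrics-scraper-5f989dc9cf-zm7zk --previous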
	
	
	==> kubernetes-dashboard [9c9d715891fb1fe0c652d51c4da130ea0afd5634cc063f2c6ba51a847dbbf57f] <==
	2025/10/02 22:15:52 Using namespace: kubernetes-dashboard
	2025/10/02 22:15:52 Using in-cluster config to connect to apiserver
	2025/10/02 22:15:52 Using secret token for csrf signing
	2025/10/02 22:15:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:15:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:15:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 22:15:52 Generating JWE encryption key
	2025/10/02 22:15:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:15:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:15:53 Initializing JWE encryption key from synchronized object
	2025/10/02 22:15:53 Creating in-cluster Sidecar client
	2025/10/02 22:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:15:53 Serving insecurely on HTTP port: 9090
	2025/10/02 22:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:15:52 Starting overwatch
	
	
	==> storage-provisioner [262c0c70277ce895db2bcada5bdf1a9907c5b11a07877c05a26b7ca693ce2690] <==
	I1002 22:16:00.377518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:16:00.411245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:16:00.425726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 22:16:17.827063       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:16:17.827248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5!
	I1002 22:16:17.827929       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3777a98b-9fef-490e-ac94-3a602207c6ab", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5 became leader
	I1002 22:16:17.931590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5!
	
	
	==> storage-provisioner [63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5] <==
	I1002 22:15:28.978896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:15:59.050542       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
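
The two storage-provisioner entries above show a client-go leader-election handoff: the replacement pod blocks at "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" (leaderelection.go:243) until the previous holder's lease expires at 22:16:17, and only then starts its provisioner controller. Below is a minimal sketch of that pattern, not minikube's actual code: it uses client-go's newer Lease lock for brevity (the Endpoints event above reflects an older Endpoints-based lock), and runProvisioner is a hypothetical stand-in for the controller loop.

	// Sketch of the leader-election handoff seen in the logs, via k8s.io/client-go.
	// Assumes in-cluster config and a POD_NAME env var for the holder identity.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
		}

		// RunOrDie blocks in the "attempting to acquire leader lease" phase
		// until the lock is won, mirroring leaderelection.go:243/253 above.
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
					// runProvisioner(ctx) // hypothetical controller loop
				},
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}

The i/o timeout in the second container's log also explains the restart: the old instance could not reach 10.96.0.1:443 within 32s and exited fatally before ever contending for the lease.
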
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-173127 -n old-k8s-version-173127: exit status 2 (435.54852ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-173127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-173127
helpers_test.go:243: (dbg) docker inspect old-k8s-version-173127:

-- stdout --
	[
	    {
	        "Id": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	        "Created": "2025-10-02T22:13:49.766969826Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1456785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:15:09.999386237Z",
	            "FinishedAt": "2025-10-02T22:15:09.222110959Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hostname",
	        "HostsPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/hosts",
	        "LogPath": "/var/lib/docker/containers/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481-json.log",
	        "Name": "/old-k8s-version-173127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-173127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-173127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481",
	                "LowerDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1c1faf80a93084300c76daadc560c80d4d938e61736131a705e67fe4c912fdb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-173127",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-173127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-173127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-173127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a06c339277a808948508cef3e15a092d700f95e921430784789696309d1a0c0",
	            "SandboxKey": "/var/run/docker/netns/0a06c339277a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34561"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34562"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34565"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34563"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34564"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-173127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:22:80:54:1c:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6008a0e16210e1fdcf0e30a954f2bad61c0505195953a96ceceb44b75081115d",
	                    "EndpointID": "ce952d0b0e1a3a108d1d82da1dae3ec14482561c2cacf1372d8adc5a37c1aa6b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-173127",
	                        "a2aece711092"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
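
The NetworkSettings.Ports map in the inspect output is what the harness actually consumes: each container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port, and later steps resolve those with a Go template (see the `docker container inspect -f` invocations in the Last Start log below). A minimal standalone sketch of that lookup, assuming docker on PATH and the container name above:

	// Resolve the ephemeral host port mapped to the container's SSH port,
	// using the same Go template the harness passes to `docker container inspect -f`.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"old-k8s-version-173127").Output()
		if err != nil {
			log.Fatal(err)
		}
		// With the inspect output above this prints "34561".
		fmt.Println(strings.TrimSpace(string(out)))
	}
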
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127: exit status 2 (388.519614ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-173127 logs -n 25: (1.500349589s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo containerd config dump                                                                                                                                                                                                  │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ -p cilium-198170 sudo crio config                                                                                                                                                                                                             │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                                                                                                                                                              │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ delete  │ -p force-systemd-flag-292135                                                                                                                                                                                                                  │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:15:09
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:15:09.732996 1456658 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:15:09.733150 1456658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:15:09.733160 1456658 out.go:374] Setting ErrFile to fd 2...
	I1002 22:15:09.733166 1456658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:15:09.733418 1456658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:15:09.733796 1456658 out.go:368] Setting JSON to false
	I1002 22:15:09.734695 1456658 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25035,"bootTime":1759418275,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:15:09.734769 1456658 start.go:140] virtualization:  
	I1002 22:15:09.737621 1456658 out.go:179] * [old-k8s-version-173127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:15:09.741518 1456658 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:15:09.741629 1456658 notify.go:220] Checking for updates...
	I1002 22:15:09.747512 1456658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:15:09.750470 1456658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:09.753420 1456658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:15:09.756288 1456658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:15:09.759163 1456658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:15:09.762787 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:09.766395 1456658 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1002 22:15:09.769218 1456658 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:15:09.790324 1456658 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:15:09.790488 1456658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:15:09.852562 1456658 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:15:09.843037902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:15:09.852671 1456658 docker.go:318] overlay module found
	I1002 22:15:09.855879 1456658 out.go:179] * Using the docker driver based on existing profile
	I1002 22:15:09.858759 1456658 start.go:304] selected driver: docker
	I1002 22:15:09.858799 1456658 start.go:924] validating driver "docker" against &{Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:09.858901 1456658 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:15:09.859657 1456658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:15:09.914697 1456658 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:15:09.903897057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:15:09.915046 1456658 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:15:09.915083 1456658 cni.go:84] Creating CNI manager for ""
	I1002 22:15:09.915146 1456658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:09.915191 1456658 start.go:348] cluster config:
	{Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:09.918497 1456658 out.go:179] * Starting "old-k8s-version-173127" primary control-plane node in "old-k8s-version-173127" cluster
	I1002 22:15:09.921444 1456658 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:15:09.924417 1456658 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:15:09.927316 1456658 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 22:15:09.927382 1456658 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 22:15:09.927413 1456658 cache.go:58] Caching tarball of preloaded images
	I1002 22:15:09.927413 1456658 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:15:09.927500 1456658 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:15:09.927510 1456658 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 22:15:09.927643 1456658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/config.json ...
	I1002 22:15:09.947751 1456658 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:15:09.947770 1456658 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:15:09.947796 1456658 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:15:09.947826 1456658 start.go:360] acquireMachinesLock for old-k8s-version-173127: {Name:mk8e3605aaf356e5fa6d09b06d4a1c1e3fe0450d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:15:09.947886 1456658 start.go:364] duration metric: took 41.747µs to acquireMachinesLock for "old-k8s-version-173127"
	I1002 22:15:09.947909 1456658 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:15:09.947914 1456658 fix.go:54] fixHost starting: 
	I1002 22:15:09.948179 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:09.965092 1456658 fix.go:112] recreateIfNeeded on old-k8s-version-173127: state=Stopped err=<nil>
	W1002 22:15:09.965120 1456658 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:15:06.878636 1455463 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-230628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.34406054s)
	I1002 22:15:06.878669 1455463 kic.go:203] duration metric: took 4.344208516s to extract preloaded images to volume ...
	W1002 22:15:06.878812 1455463 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:15:06.878933 1455463 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:15:06.929969 1455463 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-230628 --name default-k8s-diff-port-230628 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-230628 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-230628 --network default-k8s-diff-port-230628 --ip 192.168.76.2 --volume default-k8s-diff-port-230628:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:15:07.247574 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Running}}
	I1002 22:15:07.272126 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.295083 1455463 cli_runner.go:164] Run: docker exec default-k8s-diff-port-230628 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:15:07.349052 1455463 oci.go:144] the created container "default-k8s-diff-port-230628" has a running status.
	I1002 22:15:07.349093 1455463 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa...
	I1002 22:15:07.607788 1455463 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:15:07.636776 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.656551 1455463 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:15:07.656570 1455463 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-230628 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:15:07.717251 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:07.747629 1455463 machine.go:93] provisionDockerMachine start ...
	I1002 22:15:07.747730 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:07.769412 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:07.769739 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:07.769749 1455463 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:15:07.770492 1455463 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:15:10.925909 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:15:10.925932 1455463 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-230628"
	I1002 22:15:10.925997 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:10.943867 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:10.944194 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:10.944211 1455463 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-230628 && echo "default-k8s-diff-port-230628" | sudo tee /etc/hostname
	I1002 22:15:11.088301 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:15:11.088381 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:11.106567 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:11.106886 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:11.106909 1455463 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-230628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-230628/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-230628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:15:11.242963 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:15:11.243039 1455463 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:15:11.243074 1455463 ubuntu.go:190] setting up certificates
	I1002 22:15:11.243129 1455463 provision.go:84] configureAuth start
	I1002 22:15:11.243228 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:11.260202 1455463 provision.go:143] copyHostCerts
	I1002 22:15:11.260273 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:15:11.260293 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:15:11.260371 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:15:11.260472 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:15:11.260477 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:15:11.260503 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:15:11.260565 1455463 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:15:11.260570 1455463 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:15:11.260593 1455463 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:15:11.260652 1455463 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-230628 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-230628 localhost minikube]
	I1002 22:15:12.068250 1455463 provision.go:177] copyRemoteCerts
	I1002 22:15:12.068318 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:15:12.068375 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.086255 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.182260 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:15:12.200488 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:15:12.218985 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:15:12.237061 1455463 provision.go:87] duration metric: took 993.889939ms to configureAuth
	I1002 22:15:12.237129 1455463 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:15:12.237344 1455463 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:15:12.237458 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.254167 1455463 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:12.254483 1455463 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34556 <nil> <nil>}
	I1002 22:15:12.254502 1455463 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:15:12.497914 1455463 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:15:12.497940 1455463 machine.go:96] duration metric: took 4.750290631s to provisionDockerMachine
	I1002 22:15:12.497951 1455463 client.go:171] duration metric: took 10.629091398s to LocalClient.Create
	I1002 22:15:12.497966 1455463 start.go:167] duration metric: took 10.62916374s to libmachine.API.Create "default-k8s-diff-port-230628"
	I1002 22:15:12.497975 1455463 start.go:293] postStartSetup for "default-k8s-diff-port-230628" (driver="docker")
	I1002 22:15:12.497989 1455463 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:15:12.498084 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:15:12.498137 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.515393 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.610343 1455463 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:15:12.613767 1455463 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:15:12.613795 1455463 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:15:12.613815 1455463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:15:12.613872 1455463 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:15:12.613952 1455463 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:15:12.614090 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:15:12.621929 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:12.640610 1455463 start.go:296] duration metric: took 142.615794ms for postStartSetup
	I1002 22:15:12.641001 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:12.659652 1455463 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:15:12.659959 1455463 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:15:12.660012 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.680558 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.775490 1455463 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:15:12.780424 1455463 start.go:128] duration metric: took 10.915510732s to createHost
	I1002 22:15:12.780450 1455463 start.go:83] releasing machines lock for "default-k8s-diff-port-230628", held for 10.915642553s
	I1002 22:15:12.780526 1455463 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:15:12.797414 1455463 ssh_runner.go:195] Run: cat /version.json
	I1002 22:15:12.797430 1455463 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:15:12.797474 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.797496 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:12.822400 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:12.825524 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:13.009621 1455463 ssh_runner.go:195] Run: systemctl --version
	I1002 22:15:13.016613 1455463 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:15:13.055252 1455463 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:15:13.059824 1455463 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:15:13.059900 1455463 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:15:13.090824 1455463 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 22:15:13.090846 1455463 start.go:495] detecting cgroup driver to use...
	I1002 22:15:13.090879 1455463 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:15:13.090938 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:15:13.110298 1455463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:15:13.123531 1455463 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:15:13.123627 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:15:13.145259 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:15:13.164304 1455463 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:15:13.279567 1455463 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:15:13.432888 1455463 docker.go:234] disabling docker service ...
	I1002 22:15:13.432977 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:15:13.455287 1455463 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:15:13.474860 1455463 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:15:13.625471 1455463 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:15:13.786050 1455463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:15:13.799743 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:15:13.814598 1455463 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:15:13.814665 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.825426 1455463 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:15:13.825496 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.850635 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.866532 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.878588 1455463 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:15:13.892067 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.901540 1455463 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.917613 1455463 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:13.928607 1455463 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:15:13.937474 1455463 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:15:13.946013 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:14.098936 1455463 ssh_runner.go:195] Run: sudo systemctl restart crio
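
Note: the block above edits /etc/crio/crio.conf.d/02-crio.conf with idempotent sed one-liners (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. For readers who prefer Go to sed, a sketch of the same line rewrite — illustrative only; minikube really shells out to sed over SSH as logged:

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage mirrors the sed invocation above:
//   sed -i 's|^.*pause_image = .*$|pause_image = "..."|' 02-crio.conf
// Matching the whole line makes the rewrite idempotent: re-running it
// on an already-updated file leaves the file unchanged.
func setPauseImage(conf, image string) string {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(setPauseImage(conf, "registry.k8s.io/pause:3.10.1"))
}
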
	I1002 22:15:14.269991 1455463 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:15:14.270141 1455463 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:15:14.275092 1455463 start.go:563] Will wait 60s for crictl version
	I1002 22:15:14.275151 1455463 ssh_runner.go:195] Run: which crictl
	I1002 22:15:14.279338 1455463 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:15:14.304676 1455463 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:15:14.304759 1455463 ssh_runner.go:195] Run: crio --version
	I1002 22:15:14.344170 1455463 ssh_runner.go:195] Run: crio --version
	I1002 22:15:14.384479 1455463 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
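
Note: the version block above (Version 0.1.0, RuntimeName cri-o, RuntimeVersion 1.34.1) is plain "key:  value" text from crictl. One plausible way to scrape it — a hypothetical parser for illustration, not necessarily what minikube's start.go does:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Verbatim shape of the crictl version output logged above.
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.34.1\nRuntimeApiVersion:  v1\n"
	fields := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok {
			fields[k] = strings.TrimSpace(v)
		}
	}
	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"]) // cri-o 1.34.1
}
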
	I1002 22:15:09.968429 1456658 out.go:252] * Restarting existing docker container for "old-k8s-version-173127" ...
	I1002 22:15:09.968523 1456658 cli_runner.go:164] Run: docker start old-k8s-version-173127
	I1002 22:15:10.246460 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:10.272716 1456658 kic.go:430] container "old-k8s-version-173127" state is running.
	I1002 22:15:10.273106 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:10.299073 1456658 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/config.json ...
	I1002 22:15:10.299329 1456658 machine.go:93] provisionDockerMachine start ...
	I1002 22:15:10.299407 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:10.320652 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:10.320990 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:10.321006 1456658 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:15:10.321620 1456658 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55830->127.0.0.1:34561: read: connection reset by peer
	I1002 22:15:13.469932 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-173127
	
	I1002 22:15:13.469958 1456658 ubuntu.go:182] provisioning hostname "old-k8s-version-173127"
	I1002 22:15:13.470142 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:13.491674 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:13.491994 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:13.492083 1456658 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-173127 && echo "old-k8s-version-173127" | sudo tee /etc/hostname
	I1002 22:15:13.648844 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-173127
	
	I1002 22:15:13.648964 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:13.672044 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:13.672358 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:13.672381 1456658 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-173127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-173127/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-173127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:15:13.827255 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:15:13.827283 1456658 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:15:13.827339 1456658 ubuntu.go:190] setting up certificates
	I1002 22:15:13.827366 1456658 provision.go:84] configureAuth start
	I1002 22:15:13.827448 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:13.844962 1456658 provision.go:143] copyHostCerts
	I1002 22:15:13.845024 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:15:13.845048 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:15:13.845114 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:15:13.845228 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:15:13.845239 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:15:13.845262 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:15:13.845330 1456658 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:15:13.845341 1456658 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:15:13.845361 1456658 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:15:13.845422 1456658 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-173127 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-173127]
	I1002 22:15:14.419429 1456658 provision.go:177] copyRemoteCerts
	I1002 22:15:14.419481 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:15:14.419531 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:14.437746 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:14.538915 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:15:14.562194 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1002 22:15:14.582456 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:15:14.603161 1456658 provision.go:87] duration metric: took 775.779832ms to configureAuth
	I1002 22:15:14.603188 1456658 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:15:14.603388 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:14.603509 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:14.621220 1456658 main.go:141] libmachine: Using SSH client type: native
	I1002 22:15:14.621533 1456658 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34561 <nil> <nil>}
	I1002 22:15:14.621558 1456658 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:15:14.387233 1455463 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-230628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:15:14.414878 1455463 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:15:14.419232 1455463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
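
Note: the bash pipeline above rewrites /etc/hosts in place: filter out any stale host.minikube.internal line, append the fresh gateway mapping, and copy the temp file back over /etc/hosts. An equivalent sketch in Go (hypothetical helper, shown on an in-memory string rather than the real file):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for name, then appends a fresh
// "ip<TAB>name" mapping -- the same effect as the grep -v / echo pair.
func upsertHost(hosts, ip, name string) string {
	keep := []string{}
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(upsertHost(hosts, "192.168.76.1", "host.minikube.internal"))
	// 127.0.0.1	localhost
	// 192.168.76.1	host.minikube.internal
}
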
	I1002 22:15:14.433076 1455463 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:15:14.433193 1455463 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:15:14.433245 1455463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:14.480592 1455463 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:14.480613 1455463 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:15:14.480675 1455463 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:14.507148 1455463 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:14.507169 1455463 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:15:14.507176 1455463 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 22:15:14.507273 1455463 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-230628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:15:14.507351 1455463 ssh_runner.go:195] Run: crio config
	I1002 22:15:14.570454 1455463 cni.go:84] Creating CNI manager for ""
	I1002 22:15:14.570523 1455463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:14.570568 1455463 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:15:14.570613 1455463 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-230628 NodeName:default-k8s-diff-port-230628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:15:14.570803 1455463 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-230628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
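
Note: the kubeadm.yaml rendered above is plain multi-document YAML. A quick way to sanity-check a field before it ships to the node — say, that the kubelet cgroup driver matches the "cgroupfs" driver detected earlier — is a minimal unmarshal. Uses gopkg.in/yaml.v3 and is not part of minikube; just a checking sketch:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	// Trimmed to the KubeletConfiguration fields of interest.
	doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
	var cfg struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Kind, cfg.CgroupDriver) // KubeletConfiguration cgroupfs
}
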
	I1002 22:15:14.570905 1455463 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:15:14.579516 1455463 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:15:14.579645 1455463 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:15:14.588608 1455463 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 22:15:14.604213 1455463 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:15:14.623396 1455463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 22:15:14.643290 1455463 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:15:14.647832 1455463 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:15:14.663268 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:14.801476 1455463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:14.818655 1455463 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628 for IP: 192.168.76.2
	I1002 22:15:14.818677 1455463 certs.go:195] generating shared ca certs ...
	I1002 22:15:14.818694 1455463 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:14.818828 1455463 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:15:14.818875 1455463 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:15:14.818886 1455463 certs.go:257] generating profile certs ...
	I1002 22:15:14.818939 1455463 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key
	I1002 22:15:14.818973 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt with IP's: []
	I1002 22:15:15.049422 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt ...
	I1002 22:15:15.049460 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: {Name:mkcd6a24c9ed73d5db5aef11a5b181c8bdb7fff6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.049725 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key ...
	I1002 22:15:15.049738 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key: {Name:mk69e8d972e013fd5f5d9119b3148edc028c6525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.049859 1455463 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595
	I1002 22:15:15.049874 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:15:15.444096 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 ...
	I1002 22:15:15.444130 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595: {Name:mkf57e1dd446764561fa06e2b021ffad070e7caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.444358 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595 ...
	I1002 22:15:15.444375 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595: {Name:mka3a8ca78d14dabfb8b92a83293d5011419d567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.444505 1455463 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt.2d20e595 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt
	I1002 22:15:15.444620 1455463 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key
	I1002 22:15:15.444706 1455463 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key
	I1002 22:15:15.444739 1455463 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt with IP's: []
	I1002 22:15:15.854660 1455463 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt ...
	I1002 22:15:15.854716 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt: {Name:mkbf49ab5c3ae2be2607144f477252d578f89572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:15.854940 1455463 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key ...
	I1002 22:15:15.854981 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key: {Name:mk6dd4edfb6b77de253c254c37adf457d0300324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
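
Note: crypto.go:68 above ("Generating cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]") is standard x509 generation with those IPs as subject alternative names. A compressed sketch using only the Go standard library — self-signed here for brevity, whereas the profile certs in the log are signed by the minikubeCA key pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// The SANs logged above for the apiserver cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Template doubles as parent: a self-signed certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
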
	I1002 22:15:15.855221 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:15:15.855293 1455463 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:15:15.855331 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:15:15.855379 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:15:15.855432 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:15:15.855479 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:15:15.855568 1455463 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:15.856166 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:15:15.873716 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:15:15.891221 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:15:15.908664 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:15:15.930003 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 22:15:15.951582 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:15:15.980748 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:15:16.006070 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:15:16.027432 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:15:16.047890 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:15:16.069043 1455463 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:15:16.090342 1455463 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:15:16.105376 1455463 ssh_runner.go:195] Run: openssl version
	I1002 22:15:16.112514 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:15:16.121282 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.125084 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.125202 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:15:16.185914 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:15:16.196369 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:15:16.208967 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.213112 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.213208 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:15:16.258221 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:15:16.266978 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:15:16.275632 1455463 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.279693 1455463 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.279803 1455463 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:16.340819 1455463 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
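
Note: the three "openssl x509 -hash -noout" runs above compute each certificate's subject hash (51391683, 3ec20f2e, b5213941), and the ln -fs calls install the PEMs under <hash>.0 in /etc/ssl/certs, which is where OpenSSL-based clients look up CAs. A Go rendition of just the link step, with the hash value taken verbatim from the log rather than recomputed:

package main

import "os"

func main() {
	// Equivalent to: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	link := "/etc/ssl/certs/b5213941.0"
	os.Remove(link) // -f semantics: drop any existing link first
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}
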
	I1002 22:15:16.349418 1455463 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:15:16.353975 1455463 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:15:16.354080 1455463 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:16.354247 1455463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:15:16.354345 1455463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:15:16.406601 1455463 cri.go:89] found id: ""
	I1002 22:15:16.406757 1455463 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:15:16.423876 1455463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:15:16.438067 1455463 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:15:16.438182 1455463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:15:16.466543 1455463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:15:16.466620 1455463 kubeadm.go:157] found existing configuration files:
	
	I1002 22:15:16.466711 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1002 22:15:16.477683 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:15:16.477816 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:15:16.492575 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1002 22:15:16.503647 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:15:16.503710 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:15:16.511975 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1002 22:15:16.521853 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:15:16.521916 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:15:16.530398 1455463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1002 22:15:16.540153 1455463 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:15:16.540216 1455463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
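
Note: the four grep/rm pairs above implement one rule: a leftover kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8444; anything else is cleared before kubeadm init runs. The same check, sketched locally in Go (the real check runs over SSH, as logged):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8444")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			os.Remove(f) // stale or missing: clear it before kubeadm init
			fmt.Println("removed", f)
		}
	}
}
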
	I1002 22:15:16.548606 1455463 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:15:16.596774 1455463 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:15:16.599265 1455463 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:15:16.646194 1455463 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:15:16.646272 1455463 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:15:16.646313 1455463 kubeadm.go:318] OS: Linux
	I1002 22:15:16.646366 1455463 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:15:16.646421 1455463 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:15:16.646474 1455463 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:15:16.646528 1455463 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:15:16.646582 1455463 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:15:16.646635 1455463 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:15:16.646686 1455463 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:15:16.646740 1455463 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:15:16.646798 1455463 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:15:16.733983 1455463 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:15:16.734149 1455463 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:15:16.734254 1455463 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:15:16.746959 1455463 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:15:15.001482 1456658 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:15:15.001516 1456658 machine.go:96] duration metric: took 4.702169103s to provisionDockerMachine
	I1002 22:15:15.001529 1456658 start.go:293] postStartSetup for "old-k8s-version-173127" (driver="docker")
	I1002 22:15:15.001541 1456658 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:15:15.001608 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:15:15.001696 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.043856 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.155245 1456658 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:15:15.159168 1456658 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:15:15.159196 1456658 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:15:15.159215 1456658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:15:15.159293 1456658 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:15:15.159377 1456658 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:15:15.159524 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:15:15.168267 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:15.190661 1456658 start.go:296] duration metric: took 189.115006ms for postStartSetup
	I1002 22:15:15.190759 1456658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:15:15.190805 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.211353 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.304817 1456658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:15:15.310595 1456658 fix.go:56] duration metric: took 5.362672893s for fixHost
	I1002 22:15:15.310626 1456658 start.go:83] releasing machines lock for "old-k8s-version-173127", held for 5.362729663s
	I1002 22:15:15.310710 1456658 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-173127
	I1002 22:15:15.328902 1456658 ssh_runner.go:195] Run: cat /version.json
	I1002 22:15:15.328957 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.329205 1456658 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:15:15.329266 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:15.355700 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.380990 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:15.573717 1456658 ssh_runner.go:195] Run: systemctl --version
	I1002 22:15:15.580422 1456658 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:15:15.628263 1456658 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:15:15.633019 1456658 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:15:15.633084 1456658 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:15:15.641351 1456658 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:15:15.641374 1456658 start.go:495] detecting cgroup driver to use...
	I1002 22:15:15.641405 1456658 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:15:15.641454 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:15:15.667836 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:15:15.715390 1456658 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:15:15.715452 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:15:15.734266 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:15:15.760995 1456658 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:15:15.913506 1456658 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:15:16.063210 1456658 docker.go:234] disabling docker service ...
	I1002 22:15:16.063271 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:15:16.081226 1456658 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:15:16.096466 1456658 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:15:16.243957 1456658 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:15:16.398137 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:15:16.414387 1456658 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:15:16.433693 1456658 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 22:15:16.433752 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.448711 1456658 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:15:16.448779 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.462090 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.479047 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.489883 1456658 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:15:16.501682 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.512488 1456658 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.522780 1456658 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:15:16.532621 1456658 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:15:16.542899 1456658 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:15:16.551698 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:16.696543 1456658 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:15:16.865149 1456658 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:15:16.865215 1456658 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:15:16.869292 1456658 start.go:563] Will wait 60s for crictl version
	I1002 22:15:16.869351 1456658 ssh_runner.go:195] Run: which crictl
	I1002 22:15:16.873728 1456658 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:15:16.917322 1456658 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:15:16.917489 1456658 ssh_runner.go:195] Run: crio --version
	I1002 22:15:16.963052 1456658 ssh_runner.go:195] Run: crio --version
	I1002 22:15:17.016048 1456658 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1002 22:15:17.019124 1456658 cli_runner.go:164] Run: docker network inspect old-k8s-version-173127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:15:17.043687 1456658 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:15:17.047984 1456658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:15:17.059872 1456658 kubeadm.go:883] updating cluster {Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:15:17.059989 1456658 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 22:15:17.060052 1456658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:17.094405 1456658 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:17.094429 1456658 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:15:17.094486 1456658 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:15:17.127042 1456658 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:15:17.127067 1456658 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:15:17.127075 1456658 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1002 22:15:17.127200 1456658 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-173127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:15:17.127286 1456658 ssh_runner.go:195] Run: crio config
	I1002 22:15:17.215630 1456658 cni.go:84] Creating CNI manager for ""
	I1002 22:15:17.215654 1456658 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:17.215673 1456658 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:15:17.215695 1456658 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-173127 NodeName:old-k8s-version-173127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:15:17.215844 1456658 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-173127"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
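	The generated file above is a single YAML stream with four documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One quick way to sanity-check such a stream before handing it to kubeadm is to decode it document by document and list each kind — a sketch using gopkg.in/yaml.v3 (the local file path is an assumption; on the node the rendered config lands at /var/tmp/minikube/kubeadm.yaml.new):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f) // yaml.v3 decodes one document per Decode call
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }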
	
	I1002 22:15:17.215915 1456658 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1002 22:15:17.227126 1456658 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:15:17.227289 1456658 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:15:17.239789 1456658 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1002 22:15:17.253773 1456658 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:15:17.266833 1456658 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1002 22:15:17.279855 1456658 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:15:17.284090 1456658 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
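	The bash one-liner above makes the /etc/hosts entry idempotent: it strips any previous control-plane.minikube.internal line, appends the current mapping, and swaps the result in through a temp file plus sudo cp. A minimal sketch of the same filter-and-append logic in Go, stdlib only (IP and hostname taken from the log; the temp-file/sudo step is elided, so this version needs to run as root):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}

    	// Keep every line except a stale control-plane mapping, then append
    	// the current one — mirrors the grep -v / echo pipeline in the log.
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)

    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }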
	I1002 22:15:17.293848 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:17.441102 1456658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:17.457625 1456658 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127 for IP: 192.168.85.2
	I1002 22:15:17.457698 1456658 certs.go:195] generating shared ca certs ...
	I1002 22:15:17.457729 1456658 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:17.457910 1456658 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:15:17.458009 1456658 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:15:17.458051 1456658 certs.go:257] generating profile certs ...
	I1002 22:15:17.458187 1456658 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.key
	I1002 22:15:17.458310 1456658 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.key.3d23cd3a
	I1002 22:15:17.458387 1456658 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.key
	I1002 22:15:17.458555 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:15:17.458622 1456658 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:15:17.458651 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:15:17.458711 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:15:17.458758 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:15:17.458812 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:15:17.458899 1456658 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:15:17.459742 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:15:17.515829 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:15:17.567290 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:15:17.632093 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:15:17.684686 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1002 22:15:17.735109 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:15:17.773372 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:15:17.797453 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:15:17.821531 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:15:17.841697 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:15:17.862435 1456658 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:15:17.882633 1456658 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:15:17.907254 1456658 ssh_runner.go:195] Run: openssl version
	I1002 22:15:17.914345 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:15:17.923702 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.928435 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.928500 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:15:17.982813 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:15:17.991614 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:15:18.003296 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.009381 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.009453 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:15:18.057989 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:15:18.067698 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:15:18.077514 1456658 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.082230 1456658 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.082309 1456658 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:15:18.126055 1456658 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:15:18.135258 1456658 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:15:18.139930 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:15:18.181618 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:15:18.266805 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:15:18.371213 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:15:18.445762 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:15:18.691987 1456658 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
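	Each "openssl x509 -checkend 86400" call above asks whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The equivalent check written in Go with crypto/x509 — a sketch, with the path being one of the certs from this run:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: does the cert outlive now + 24h?
    	deadline := time.Now().Add(86400 * time.Second)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid beyond 24h")
    }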
	I1002 22:15:18.796815 1456658 kubeadm.go:400] StartCluster: {Name:old-k8s-version-173127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-173127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:15:18.796909 1456658 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:15:18.796989 1456658 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:15:18.962574 1456658 cri.go:89] found id: "127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582"
	I1002 22:15:18.962598 1456658 cri.go:89] found id: "9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b"
	I1002 22:15:18.962603 1456658 cri.go:89] found id: "b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351"
	I1002 22:15:18.962609 1456658 cri.go:89] found id: "ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c"
	I1002 22:15:18.962612 1456658 cri.go:89] found id: ""
	I1002 22:15:18.962661 1456658 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:15:19.011496 1456658 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:15:19Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:15:19.011615 1456658 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:15:19.036559 1456658 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:15:19.036579 1456658 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:15:19.036639 1456658 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:15:19.068243 1456658 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:15:19.068647 1456658 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-173127" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:19.068775 1456658 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-173127" cluster setting kubeconfig missing "old-k8s-version-173127" context setting]
	I1002 22:15:19.069067 1456658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.070540 1456658 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:15:19.092524 1456658 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:15:19.092557 1456658 kubeadm.go:601] duration metric: took 55.972415ms to restartPrimaryControlPlane
	I1002 22:15:19.092567 1456658 kubeadm.go:402] duration metric: took 295.761664ms to StartCluster
	I1002 22:15:19.092582 1456658 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.092644 1456658 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:19.093259 1456658 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:19.093478 1456658 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:15:19.093775 1456658 config.go:182] Loaded profile config "old-k8s-version-173127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1002 22:15:19.093822 1456658 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:15:19.093889 1456658 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-173127"
	I1002 22:15:19.093910 1456658 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-173127"
	W1002 22:15:19.093983 1456658 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:15:19.094008 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.093939 1456658 addons.go:69] Setting dashboard=true in profile "old-k8s-version-173127"
	I1002 22:15:19.094056 1456658 addons.go:238] Setting addon dashboard=true in "old-k8s-version-173127"
	W1002 22:15:19.094062 1456658 addons.go:247] addon dashboard should already be in state true
	I1002 22:15:19.094086 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.094681 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.093947 1456658 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-173127"
	I1002 22:15:19.095168 1456658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-173127"
	I1002 22:15:19.095408 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.095799 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.099500 1456658 out.go:179] * Verifying Kubernetes components...
	I1002 22:15:19.106123 1456658 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:19.153626 1456658 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:15:19.153805 1456658 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:15:19.154856 1456658 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-173127"
	W1002 22:15:19.154876 1456658 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:15:19.154899 1456658 host.go:66] Checking if "old-k8s-version-173127" exists ...
	I1002 22:15:19.155312 1456658 cli_runner.go:164] Run: docker container inspect old-k8s-version-173127 --format={{.State.Status}}
	I1002 22:15:19.157571 1456658 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:19.157621 1456658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:15:19.157686 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.160443 1456658 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:15:19.163386 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:15:19.163408 1456658 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:15:19.163483 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.198187 1456658 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:19.198208 1456658 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:15:19.198271 1456658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-173127
	I1002 22:15:19.222171 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.228269 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.236013 1456658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34561 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/old-k8s-version-173127/id_rsa Username:docker}
	I1002 22:15:19.535388 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:15:19.535412 1456658 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:15:19.603764 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:19.620433 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:19.652063 1456658 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:16.750286 1455463 out.go:252]   - Generating certificates and keys ...
	I1002 22:15:16.750388 1455463 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:15:16.750467 1455463 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:15:16.925038 1455463 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:15:18.243893 1455463 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:15:18.407058 1455463 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:15:18.978675 1455463 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:15:20.385527 1455463 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:15:20.385861 1455463 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-230628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:15:20.678554 1455463 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:15:20.679137 1455463 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-230628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:15:21.116855 1455463 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:15:21.544475 1455463 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:15:19.771580 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:15:19.771606 1456658 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:15:19.928346 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:15:19.928373 1456658 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:15:20.056457 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:15:20.056527 1456658 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:15:20.161666 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:15:20.161737 1456658 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:15:20.211551 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:15:20.211623 1456658 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:15:20.252349 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:15:20.252420 1456658 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:15:20.288346 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:15:20.288414 1456658 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:15:20.319345 1456658 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:15:20.319409 1456658 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:15:20.363121 1456658 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:15:22.347270 1455463 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:15:22.347820 1455463 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:15:22.755674 1455463 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:15:23.442143 1455463 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:15:23.652976 1455463 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:15:24.449573 1455463 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:15:25.364671 1455463 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:15:25.365816 1455463 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:15:25.368956 1455463 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:15:25.372471 1455463 out.go:252]   - Booting up control plane ...
	I1002 22:15:25.372585 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:15:25.377080 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:15:25.380676 1455463 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:15:25.411081 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:15:25.411193 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:15:25.425436 1455463 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:15:25.425537 1455463 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:15:25.425584 1455463 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:15:25.666465 1455463 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:15:25.666590 1455463 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:15:30.716345 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.095858218s)
	I1002 22:15:30.716400 1456658 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.064306851s)
	I1002 22:15:30.716533 1456658 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-173127" to be "Ready" ...
	I1002 22:15:30.717875 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.114068825s)
	I1002 22:15:30.762652 1456658 node_ready.go:49] node "old-k8s-version-173127" is "Ready"
	I1002 22:15:30.762677 1456658 node_ready.go:38] duration metric: took 46.035769ms for node "old-k8s-version-173127" to be "Ready" ...
	I1002 22:15:30.762689 1456658 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:15:30.762747 1456658 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:15:31.535052 1456658 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.171845805s)
	I1002 22:15:31.535148 1456658 api_server.go:72] duration metric: took 12.44163991s to wait for apiserver process to appear ...
	I1002 22:15:31.535353 1456658 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:15:31.535374 1456658 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:15:31.537883 1456658 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-173127 addons enable metrics-server
	
	I1002 22:15:31.540930 1456658 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 22:15:28.154992 1455463 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501656674s
	I1002 22:15:28.158423 1455463 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:15:28.158735 1455463 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1002 22:15:28.159028 1455463 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:15:28.159968 1455463 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
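	kubeadm's control-plane-check above simply polls each component's health endpoint over HTTPS until it answers 200. A bare-bones poller for the same three endpoints from this run — a sketch, not kubeadm's code; TLS verification is skipped only because the component serving certs are not in the system trust store, which is tolerable for a local health probe and nothing else:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	endpoints := []string{
    		"https://192.168.76.2:8444/livez", // kube-apiserver
    		"https://127.0.0.1:10257/healthz", // kube-controller-manager
    		"https://127.0.0.1:10259/livez",   // kube-scheduler
    	}
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for _, url := range endpoints {
    		for {
    			resp, err := client.Get(url)
    			if err == nil && resp.StatusCode == http.StatusOK {
    				resp.Body.Close()
    				fmt.Println(url, "healthy")
    				break
    			}
    			if resp != nil {
    				resp.Body.Close()
    			}
    			time.Sleep(time.Second)
    		}
    	}
    }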
	I1002 22:15:31.544802 1456658 addons.go:514] duration metric: took 12.450979691s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 22:15:31.550933 1456658 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:15:31.552374 1456658 api_server.go:141] control plane version: v1.28.0
	I1002 22:15:31.552395 1456658 api_server.go:131] duration metric: took 17.034298ms to wait for apiserver health ...
	I1002 22:15:31.552404 1456658 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:15:31.556113 1456658 system_pods.go:59] 8 kube-system pods found
	I1002 22:15:31.556206 1456658 system_pods.go:61] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:15:31.556228 1456658 system_pods.go:61] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:15:31.556263 1456658 system_pods.go:61] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:15:31.556289 1456658 system_pods.go:61] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:15:31.556311 1456658 system_pods.go:61] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:15:31.556349 1456658 system_pods.go:61] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:15:31.556376 1456658 system_pods.go:61] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:15:31.556398 1456658 system_pods.go:61] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Running
	I1002 22:15:31.556431 1456658 system_pods.go:74] duration metric: took 4.020425ms to wait for pod list to return data ...
	I1002 22:15:31.556460 1456658 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:15:31.559346 1456658 default_sa.go:45] found service account: "default"
	I1002 22:15:31.559407 1456658 default_sa.go:55] duration metric: took 2.925716ms for default service account to be created ...
	I1002 22:15:31.559447 1456658 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:15:31.565947 1456658 system_pods.go:86] 8 kube-system pods found
	I1002 22:15:31.566021 1456658 system_pods.go:89] "coredns-5dd5756b68-78sbd" [f75699fd-ea3a-48d9-8ed2-5b44e003cb58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:15:31.566062 1456658 system_pods.go:89] "etcd-old-k8s-version-173127" [225299ab-3de1-4fd9-8ada-c55b33128e67] Running
	I1002 22:15:31.566086 1456658 system_pods.go:89] "kindnet-xtlhd" [e972a3fc-03ef-437a-a0d6-3f7337f3a2e7] Running
	I1002 22:15:31.566107 1456658 system_pods.go:89] "kube-apiserver-old-k8s-version-173127" [6594b352-cc57-4883-8a6c-85b0e820eac2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:15:31.566150 1456658 system_pods.go:89] "kube-controller-manager-old-k8s-version-173127" [590d5ae8-5a2b-4db5-9da2-75793a01dd30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:15:31.566176 1456658 system_pods.go:89] "kube-proxy-86prs" [b1d34de9-8156-4726-b410-78c5d3ca9beb] Running
	I1002 22:15:31.566196 1456658 system_pods.go:89] "kube-scheduler-old-k8s-version-173127" [4e4be49a-3b08-4188-9b69-9f0618db154c] Running
	I1002 22:15:31.566228 1456658 system_pods.go:89] "storage-provisioner" [4434574b-8c2c-4a2a-b3c5-60122fa77e43] Running
	I1002 22:15:31.566253 1456658 system_pods.go:126] duration metric: took 6.784848ms to wait for k8s-apps to be running ...
	I1002 22:15:31.566274 1456658 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:15:31.566358 1456658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:15:31.588622 1456658 system_svc.go:56] duration metric: took 22.338188ms WaitForService to wait for kubelet
	I1002 22:15:31.588650 1456658 kubeadm.go:586] duration metric: took 12.495141254s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:15:31.588668 1456658 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:15:31.594359 1456658 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:15:31.594441 1456658 node_conditions.go:123] node cpu capacity is 2
	I1002 22:15:31.594468 1456658 node_conditions.go:105] duration metric: took 5.793489ms to run NodePressure ...
	I1002 22:15:31.594493 1456658 start.go:241] waiting for startup goroutines ...
	I1002 22:15:31.594526 1456658 start.go:246] waiting for cluster config update ...
	I1002 22:15:31.594555 1456658 start.go:255] writing updated cluster config ...
	I1002 22:15:31.594890 1456658 ssh_runner.go:195] Run: rm -f paused
	I1002 22:15:31.599261 1456658 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:15:31.607281 1456658 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:15:33.614942 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
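	The pod_ready.go loop above retries until the pod either reports the Ready condition or disappears. A stripped-down version of that wait using client-go — a sketch under the assumption that the default kubeconfig selects this cluster; namespace and pod name are the values from this run:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "coredns-5dd5756b68-78sbd", metav1.GetOptions{})
    		if errors.IsNotFound(err) {
    			fmt.Println("pod is gone") // "Ready or be gone" — gone also ends the wait
    			return
    		}
    		if err == nil && isReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }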
	I1002 22:15:32.746490 1455463 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.586025458s
	I1002 22:15:34.810074 1455463 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.649351715s
	I1002 22:15:36.661155 1455463 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.501736021s
	I1002 22:15:36.681682 1455463 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:15:36.698789 1455463 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:15:36.715122 1455463 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:15:36.715624 1455463 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-230628 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:15:36.738680 1455463 kubeadm.go:318] [bootstrap-token] Using token: rz2xxr.tp4rjlg1n4owddq6
	I1002 22:15:36.741583 1455463 out.go:252]   - Configuring RBAC rules ...
	I1002 22:15:36.741715 1455463 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:15:36.749548 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:15:36.767244 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:15:36.773322 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:15:36.778338 1455463 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:15:36.785949 1455463 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:15:37.070380 1455463 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:15:37.505203 1455463 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:15:38.069819 1455463 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:15:38.071618 1455463 kubeadm.go:318] 
	I1002 22:15:38.071699 1455463 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:15:38.071705 1455463 kubeadm.go:318] 
	I1002 22:15:38.071786 1455463 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:15:38.071791 1455463 kubeadm.go:318] 
	I1002 22:15:38.071837 1455463 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:15:38.071908 1455463 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:15:38.071961 1455463 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:15:38.071966 1455463 kubeadm.go:318] 
	I1002 22:15:38.072029 1455463 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:15:38.072072 1455463 kubeadm.go:318] 
	I1002 22:15:38.072123 1455463 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:15:38.072127 1455463 kubeadm.go:318] 
	I1002 22:15:38.072183 1455463 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:15:38.072262 1455463 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:15:38.072333 1455463 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:15:38.072338 1455463 kubeadm.go:318] 
	I1002 22:15:38.072438 1455463 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:15:38.072519 1455463 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:15:38.072523 1455463 kubeadm.go:318] 
	I1002 22:15:38.072611 1455463 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token rz2xxr.tp4rjlg1n4owddq6 \
	I1002 22:15:38.072718 1455463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:15:38.072740 1455463 kubeadm.go:318] 	--control-plane 
	I1002 22:15:38.072745 1455463 kubeadm.go:318] 
	I1002 22:15:38.072834 1455463 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:15:38.072839 1455463 kubeadm.go:318] 
	I1002 22:15:38.072924 1455463 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token rz2xxr.tp4rjlg1n4owddq6 \
	I1002 22:15:38.073031 1455463 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
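	The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which lets joining nodes pin the CA without a pre-shared file. Computing it from ca.crt in Go (path taken from the certs copied earlier in the log):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }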
	I1002 22:15:38.077413 1455463 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:15:38.077657 1455463 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:15:38.077779 1455463 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:15:38.077805 1455463 cni.go:84] Creating CNI manager for ""
	I1002 22:15:38.077814 1455463 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:15:38.081103 1455463 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:15:36.115162 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:38.116012 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:15:38.084222 1455463 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:15:38.089365 1455463 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:15:38.089389 1455463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:15:38.107473 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:15:38.443975 1455463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:15:38.444088 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:38.444200 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-230628 minikube.k8s.io/updated_at=2025_10_02T22_15_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=default-k8s-diff-port-230628 minikube.k8s.io/primary=true
	I1002 22:15:38.604038 1455463 ops.go:34] apiserver oom_adj: -16
	I1002 22:15:38.604160 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:39.104899 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:39.604972 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:40.104297 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:40.604718 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:41.104668 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:41.604673 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:42.105255 1455463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:15:42.351543 1455463 kubeadm.go:1113] duration metric: took 3.907517019s to wait for elevateKubeSystemPrivileges
	I1002 22:15:42.351582 1455463 kubeadm.go:402] duration metric: took 25.997505097s to StartCluster
	I1002 22:15:42.351600 1455463 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:42.351669 1455463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:15:42.352836 1455463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:15:42.353074 1455463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:15:42.353167 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:15:42.353418 1455463 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:15:42.353468 1455463 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:15:42.353543 1455463 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-230628"
	I1002 22:15:42.353556 1455463 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-230628"
	I1002 22:15:42.353586 1455463 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:15:42.354141 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.354493 1455463 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-230628"
	I1002 22:15:42.354514 1455463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-230628"
	I1002 22:15:42.354800 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.356478 1455463 out.go:179] * Verifying Kubernetes components...
	I1002 22:15:42.367009 1455463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:15:42.395857 1455463 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-230628"
	I1002 22:15:42.395898 1455463 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:15:42.398720 1455463 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:15:42.406763 1455463 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:15:42.409899 1455463 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:42.409921 1455463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:15:42.409983 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:42.458101 1455463 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:42.458124 1455463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:15:42.458199 1455463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:15:42.473455 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:42.504177 1455463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34556 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:15:42.977660 1455463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:15:43.069700 1455463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:15:43.130232 1455463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:15:43.130503 1455463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:15:44.633926 1455463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.503373132s)
	I1002 22:15:44.633958 1455463 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
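	The sed pipeline above edits the Corefile inside CoreDNS's ConfigMap: it inserts a hosts block mapping host.minikube.internal to the gateway IP just before the "forward . /etc/resolv.conf" plugin, so in-cluster pods can resolve the host machine. The same string surgery sketched in Go (operating on raw Corefile text; fetching and replacing the ConfigMap via kubectl is elided):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts{} block before the forward plugin,
    // mirroring the sed expression in the log.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf(
    		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
    		hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            forward . /etc/resolv.conf
            cache 30
        }
    `
    	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
    }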
	I1002 22:15:44.635067 1455463 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.504760422s)
	I1002 22:15:44.635988 1455463 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:15:44.636417 1455463 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.566663708s)
	I1002 22:15:44.639701 1455463 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1002 22:15:40.614122 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:42.617854 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:15:44.643898 1455463 addons.go:514] duration metric: took 2.290407921s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1002 22:15:45.157399 1455463 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-230628" context rescaled to 1 replicas
	W1002 22:15:45.116177 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:47.616549 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:46.639665 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:49.141848 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:51.143818 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:50.113642 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:52.114300 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:54.612839 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:53.639377 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:56.139389 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:15:56.613750 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:58.613901 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:15:58.139877 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:00.195160 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:00.614601 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:03.114600 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:02.639680 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:05.139702 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:05.613081 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	W1002 22:16:07.615019 1456658 pod_ready.go:104] pod "coredns-5dd5756b68-78sbd" is not "Ready", error: <nil>
	I1002 22:16:08.113225 1456658 pod_ready.go:94] pod "coredns-5dd5756b68-78sbd" is "Ready"
	I1002 22:16:08.113257 1456658 pod_ready.go:86] duration metric: took 36.505894744s for pod "coredns-5dd5756b68-78sbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.116927 1456658 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.124639 1456658 pod_ready.go:94] pod "etcd-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.124666 1456658 pod_ready.go:86] duration metric: took 7.714554ms for pod "etcd-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.128821 1456658 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.140885 1456658 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.140916 1456658 pod_ready.go:86] duration metric: took 12.067994ms for pod "kube-apiserver-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.146101 1456658 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.311026 1456658 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-173127" is "Ready"
	I1002 22:16:08.311057 1456658 pod_ready.go:86] duration metric: took 164.927706ms for pod "kube-controller-manager-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.512074 1456658 pod_ready.go:83] waiting for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:08.911724 1456658 pod_ready.go:94] pod "kube-proxy-86prs" is "Ready"
	I1002 22:16:08.911751 1456658 pod_ready.go:86] duration metric: took 399.651667ms for pod "kube-proxy-86prs" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.111723 1456658 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.511330 1456658 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-173127" is "Ready"
	I1002 22:16:09.511355 1456658 pod_ready.go:86] duration metric: took 399.554306ms for pod "kube-scheduler-old-k8s-version-173127" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:09.511367 1456658 pod_ready.go:40] duration metric: took 37.912028834s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:16:09.571232 1456658 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1002 22:16:09.574565 1456658 out.go:203] 
	W1002 22:16:09.577504 1456658 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1002 22:16:09.580525 1456658 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1002 22:16:09.583501 1456658 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-173127" cluster and "default" namespace by default
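The version-skew warning above reflects kubectl's support policy: a client is only supported within one minor version of the apiserver, and 1.33 against a 1.28 control plane is five minors apart, hence the pointer to the version-matched binary that "minikube kubectl" fetches.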
	W1002 22:16:07.140175 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:09.141142 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:11.639989 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:14.140227 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:16.140374 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:18.638802 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:20.639073 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	W1002 22:16:22.639853 1455463 node_ready.go:57] node "default-k8s-diff-port-230628" has "Ready":"False" status (will retry)
	I1002 22:16:24.640229 1455463 node_ready.go:49] node "default-k8s-diff-port-230628" is "Ready"
	I1002 22:16:24.640256 1455463 node_ready.go:38] duration metric: took 40.004238996s for node "default-k8s-diff-port-230628" to be "Ready" ...
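The 6m0s node-Ready wait that just completed reduces to polling the node object until its NodeReady condition reports True. A minimal client-go sketch of that loop follows; it is illustrative only (the kubeconfig path and node name are taken from this run, and the 2s poll interval is invented, not lifted from minikube's node_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the named node reports condition Ready=True.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API hiccups as "not ready yet" and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "default-k8s-diff-port-230628", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}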
	I1002 22:16:24.640270 1455463 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:16:24.640323 1455463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:16:24.660317 1455463 api_server.go:72] duration metric: took 42.307209613s to wait for apiserver process to appear ...
	I1002 22:16:24.660339 1455463 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:16:24.660357 1455463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 22:16:24.683380 1455463 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 22:16:24.689526 1455463 api_server.go:141] control plane version: v1.34.1
	I1002 22:16:24.689555 1455463 api_server.go:131] duration metric: took 29.210581ms to wait for apiserver health ...
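The healthz probe above is an HTTPS GET against the apiserver that expects the literal body "ok". A standalone sketch, assuming /healthz is reachable anonymously on this cluster and skipping certificate verification purely for illustration (a real client would trust the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: the apiserver cert is signed by the
				// cluster-local CA, which this throwaway probe does not load.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.76.2:8444/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
	}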
	I1002 22:16:24.689633 1455463 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:16:24.694703 1455463 system_pods.go:59] 8 kube-system pods found
	I1002 22:16:24.694737 1455463 system_pods.go:61] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:16:24.694744 1455463 system_pods.go:61] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running
	I1002 22:16:24.694750 1455463 system_pods.go:61] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:16:24.694755 1455463 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running
	I1002 22:16:24.694760 1455463 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:16:24.694764 1455463 system_pods.go:61] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:16:24.694768 1455463 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running
	I1002 22:16:24.694774 1455463 system_pods.go:61] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:16:24.694779 1455463 system_pods.go:74] duration metric: took 5.137803ms to wait for pod list to return data ...
	I1002 22:16:24.694787 1455463 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:16:24.702773 1455463 default_sa.go:45] found service account: "default"
	I1002 22:16:24.702804 1455463 default_sa.go:55] duration metric: took 8.010794ms for default service account to be created ...
	I1002 22:16:24.702815 1455463 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:16:24.708354 1455463 system_pods.go:86] 8 kube-system pods found
	I1002 22:16:24.708384 1455463 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:16:24.708393 1455463 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running
	I1002 22:16:24.708400 1455463 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:16:24.708405 1455463 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running
	I1002 22:16:24.708409 1455463 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:16:24.708413 1455463 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:16:24.708445 1455463 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running
	I1002 22:16:24.708454 1455463 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:16:24.708474 1455463 retry.go:31] will retry after 275.990104ms: missing components: kube-dns
	I1002 22:16:24.990910 1455463 system_pods.go:86] 8 kube-system pods found
	I1002 22:16:24.990948 1455463 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:16:24.990955 1455463 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running
	I1002 22:16:24.990961 1455463 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:16:24.990965 1455463 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running
	I1002 22:16:24.990969 1455463 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:16:24.990977 1455463 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:16:24.990982 1455463 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running
	I1002 22:16:24.990987 1455463 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:16:24.991002 1455463 retry.go:31] will retry after 362.541677ms: missing components: kube-dns
	I1002 22:16:25.361646 1455463 system_pods.go:86] 8 kube-system pods found
	I1002 22:16:25.361696 1455463 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:16:25.361705 1455463 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running
	I1002 22:16:25.361712 1455463 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:16:25.361730 1455463 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running
	I1002 22:16:25.361746 1455463 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:16:25.361751 1455463 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:16:25.361755 1455463 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running
	I1002 22:16:25.361765 1455463 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:16:25.361810 1455463 retry.go:31] will retry after 416.319709ms: missing components: kube-dns
	I1002 22:16:25.793561 1455463 system_pods.go:86] 8 kube-system pods found
	I1002 22:16:25.793591 1455463 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running
	I1002 22:16:25.793597 1455463 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running
	I1002 22:16:25.793603 1455463 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:16:25.793607 1455463 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running
	I1002 22:16:25.793611 1455463 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:16:25.793615 1455463 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:16:25.793619 1455463 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running
	I1002 22:16:25.793623 1455463 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:16:25.793630 1455463 system_pods.go:126] duration metric: took 1.090809719s to wait for k8s-apps to be running ...
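The three "will retry after ...ms: missing components: kube-dns" lines above show the shape of this wait: re-list the kube-system pods, and if a required component is still Pending, sleep a short randomized, growing interval and try again until a deadline. A self-contained sketch of the pattern (the interval constants are invented for illustration, not minikube's retry.go values):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter calls fn until it succeeds or timeout elapses, sleeping a
	// jittered, slowly growing interval between attempts.
	func retryWithJitter(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		base := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			time.Sleep(base + time.Duration(rand.Int63n(int64(base)))) // jitter
			base += base / 4                                           // grow ~25% per attempt
		}
	}

	func main() {
		pending := 3 // simulate kube-dns staying Pending for three checks
		err := retryWithJitter(10*time.Second, func() error {
			if pending > 0 {
				pending--
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println(err) // <nil> once the simulated pod comes up
	}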
	I1002 22:16:25.793637 1455463 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:16:25.793699 1455463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:16:25.817129 1455463 system_svc.go:56] duration metric: took 23.482609ms WaitForService to wait for kubelet
	I1002 22:16:25.817154 1455463 kubeadm.go:586] duration metric: took 43.464051345s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:16:25.817173 1455463 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:16:25.821320 1455463 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:16:25.821349 1455463 node_conditions.go:123] node cpu capacity is 2
	I1002 22:16:25.821361 1455463 node_conditions.go:105] duration metric: took 4.182793ms to run NodePressure ...
	I1002 22:16:25.821371 1455463 start.go:241] waiting for startup goroutines ...
	I1002 22:16:25.821379 1455463 start.go:246] waiting for cluster config update ...
	I1002 22:16:25.821390 1455463 start.go:255] writing updated cluster config ...
	I1002 22:16:25.821693 1455463 ssh_runner.go:195] Run: rm -f paused
	I1002 22:16:25.826651 1455463 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:16:25.830620 1455463 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.836375 1455463 pod_ready.go:94] pod "coredns-66bc5c9577-jvqks" is "Ready"
	I1002 22:16:25.836405 1455463 pod_ready.go:86] duration metric: took 5.756993ms for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.839601 1455463 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.847319 1455463 pod_ready.go:94] pod "etcd-default-k8s-diff-port-230628" is "Ready"
	I1002 22:16:25.847348 1455463 pod_ready.go:86] duration metric: took 7.705791ms for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.853072 1455463 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.858437 1455463 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-230628" is "Ready"
	I1002 22:16:25.858543 1455463 pod_ready.go:86] duration metric: took 5.265818ms for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:25.861055 1455463 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:26.231795 1455463 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-230628" is "Ready"
	I1002 22:16:26.231900 1455463 pod_ready.go:86] duration metric: took 370.778236ms for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:16:26.432991 1455463 pod_ready.go:83] waiting for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.790406326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.799196561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.799989359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.815927185Z" level=info msg="Created container e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper" id=8ea47ded-df5c-4dd1-a536-f3b5bec54e7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.816767145Z" level=info msg="Starting container: e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d" id=a59cc8f1-888a-4630-adaa-a4b3d8937508 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:16:02 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:02.820188623Z" level=info msg="Started container" PID=1639 containerID=e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper id=a59cc8f1-888a-4630-adaa-a4b3d8937508 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2
	Oct 02 22:16:02 old-k8s-version-173127 conmon[1637]: conmon e878da3125cc7504cca8 <ninfo>: container 1639 exited with status 1
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.116506443Z" level=info msg="Removing container: 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.123793271Z" level=info msg="Error loading conmon cgroup of container 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e: cgroup deleted" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:03 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:03.12715569Z" level=info msg="Removed container 01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk/dashboard-metrics-scraper" id=b59f70b3-22dd-487f-9287-99e1f361e4ef name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.515177019Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523105395Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523145558Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.523177328Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526413381Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526448687Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.526470718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529840906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529879084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.529902082Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.53378587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.53382358Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.533845331Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.538082359Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:16:08 old-k8s-version-173127 crio[650]: time="2025-10-02T22:16:08.538120791Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e878da3125cc7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   cc46f91e35013       dashboard-metrics-scraper-5f989dc9cf-zm7zk       kubernetes-dashboard
	262c0c70277ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   20e90aec323ec       storage-provisioner                              kube-system
	9c9d715891fb1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   5825d7223aec8       kubernetes-dashboard-8694d4445c-vlglj            kubernetes-dashboard
	4fd4342de7847       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   656cff9e4f77d       coredns-5dd5756b68-78sbd                         kube-system
	973a833ef3a2e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   4f2eddc047af2       busybox                                          default
	084caaaf7525a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   7a4d1b38f17ac       kube-proxy-86prs                                 kube-system
	63792d5054495       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   20e90aec323ec       storage-provisioner                              kube-system
	85ae11f7046e7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   db121931dc350       kindnet-xtlhd                                    kube-system
	127dfbf0ed811       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   8e67e54766172       kube-controller-manager-old-k8s-version-173127   kube-system
	9248a46e17c68       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   2d1a5b594be82       etcd-old-k8s-version-173127                      kube-system
	b311e5165e9b9       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   5876288b79169       kube-scheduler-old-k8s-version-173127            kube-system
	ce52286ca959a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   4326f4b4097b9       kube-apiserver-old-k8s-version-173127            kube-system
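This table has the shape of "sudo crictl ps -a" output on the node (an assumption about how the report collects it). Note the two Exited rows: the dashboard-metrics-scraper container that just crashed with status 1 in the CRI-O log above (attempt 2), and the pre-restart storage-provisioner (attempt 1) that was superseded by the Running attempt 2.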
	
	
	==> coredns [4fd4342de784742d74c842c78faaa45d24380c2b83bdf44da664a2c16a2289d7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53574 - 30820 "HINFO IN 4664996271170584488.1405436629196325381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012015901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-173127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-173127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=old-k8s-version-173127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_14_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-173127
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:16:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:16:18 +0000   Thu, 02 Oct 2025 22:14:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-173127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 f592df572e62434f8f004b57bfa02bb2
	  System UUID:                58645267-b6b4-4674-bdc1-8f78d84fc839
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-5dd5756b68-78sbd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m1s
	  kube-system                 etcd-old-k8s-version-173127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m13s
	  kube-system                 kindnet-xtlhd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m1s
	  kube-system                 kube-apiserver-old-k8s-version-173127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-old-k8s-version-173127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-86prs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-old-k8s-version-173127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-zm7zk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vlglj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 118s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s              kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s              kubelet          Node old-k8s-version-173127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s              kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m2s               node-controller  Node old-k8s-version-173127 event: Registered Node old-k8s-version-173127 in Controller
	  Normal  NodeReady                106s               kubelet          Node old-k8s-version-173127 status is now: NodeReady
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node old-k8s-version-173127 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-173127 event: Registered Node old-k8s-version-173127 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9248a46e17c68748b0f35e7cdc9ae68afe7904438ef405050db309d6ad51f90b] <==
	{"level":"info","ts":"2025-10-02T22:15:18.822192Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T22:15:18.8222Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-02T22:15:18.822517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-02T22:15:18.822569Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-02T22:15:18.833052Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-02T22:15:18.843586Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:15:18.84364Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-02T22:15:18.845356Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:15:18.845379Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-02T22:15:18.87724Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-02T22:15:18.881582Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-02T22:15:20.566083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-02T22:15:20.566284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.566386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-02T22:15:20.57022Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-173127 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-02T22:15:20.570315Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:15:20.571268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-02T22:15:20.574085Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-02T22:15:20.574999Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-02T22:15:20.582645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-02T22:15:20.582716Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 22:16:28 up  6:58,  0 user,  load average: 3.25, 2.21, 2.01
	Linux old-k8s-version-173127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [85ae11f7046e7c4088564223c92310bf84ed1291c50dba819b3c63aa0fab5bab] <==
	I1002 22:15:28.257537       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:15:28.257796       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:15:28.257935       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:15:28.257946       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:15:28.257957       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:15:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:15:28.514676       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:15:28.514820       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:15:28.514859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:15:28.515649       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:15:58.518178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:15:58.518185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:15:58.518364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:15:58.518419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:16:00.015715       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:16:00.015761       1 metrics.go:72] Registering metrics
	I1002 22:16:00.015844       1 controller.go:711] "Syncing nftables rules"
	I1002 22:16:08.514824       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:16:08.514884       1 main.go:301] handling current node
	I1002 22:16:18.514573       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:16:18.514605       1 main.go:301] handling current node
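Read alongside the other components' logs, the 22:15:58 "i/o timeout" errors against 10.96.0.1:443 look like a startup-ordering artifact rather than a persistent fault: kindnet opened its watches at 22:15:28, before kube-proxy reported its caches synced (~22:15:31), so the first 30-second list against the kubernetes service VIP stalled; the retries then succeed and kindnet's caches sync at 22:16:00.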
	
	
	==> kube-apiserver [ce52286ca959aa1094c5360e368796a7e4a0f842b8a4a79f996f8d6b468e669c] <==
	I1002 22:15:27.328167       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:15:27.350682       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 22:15:27.354760       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 22:15:27.354853       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 22:15:27.354976       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:15:27.361371       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1002 22:15:27.365009       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 22:15:27.369980       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1002 22:15:27.370676       1 aggregator.go:166] initial CRD sync complete...
	I1002 22:15:27.370731       1 autoregister_controller.go:141] Starting autoregister controller
	I1002 22:15:27.370761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:15:27.370795       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:15:27.375777       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1002 22:15:27.438118       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:15:27.892052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:15:31.269042       1 controller.go:624] quota admission added evaluator for: namespaces
	I1002 22:15:31.354991       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 22:15:31.390339       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:15:31.404304       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:15:31.419169       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 22:15:31.487145       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.227.172"}
	I1002 22:15:31.525072       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.175.107"}
	I1002 22:15:41.259430       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 22:15:41.289726       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 22:15:41.341479       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [127dfbf0ed811f4c5b76702b20dd8c06a78db529e7c9966d4050c03afa84f582] <==
	I1002 22:15:41.298877       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	I1002 22:15:41.312255       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-vlglj"
	I1002 22:15:41.312557       1 shared_informer.go:318] Caches are synced for stateful set
	I1002 22:15:41.326388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.398015ms"
	I1002 22:15:41.334488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.40859ms"
	I1002 22:15:41.347113       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1002 22:15:41.362162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.546916ms"
	I1002 22:15:41.362249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.714µs"
	I1002 22:15:41.381684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="262.724µs"
	I1002 22:15:41.388751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.244679ms"
	I1002 22:15:41.389571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.896µs"
	I1002 22:15:41.398246       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 22:15:41.401917       1 shared_informer.go:318] Caches are synced for resource quota
	I1002 22:15:41.798199       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:15:41.798244       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 22:15:41.800770       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:15:48.015782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="476.652µs"
	I1002 22:15:49.033313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.655µs"
	I1002 22:15:50.037402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.162µs"
	I1002 22:15:53.075721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="31.083985ms"
	I1002 22:15:53.075827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.851µs"
	I1002 22:16:04.141305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.553µs"
	I1002 22:16:07.946474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.817166ms"
	I1002 22:16:07.947388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.006µs"
	I1002 22:16:11.637052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.522µs"
	
	
	==> kube-proxy [084caaaf7525a0c67194fc9d1407cd6fe2a876234d337900e84141524d04d741] <==
	I1002 22:15:30.196002       1 server_others.go:69] "Using iptables proxy"
	I1002 22:15:30.243721       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1002 22:15:30.998456       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:15:31.015834       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:15:31.015939       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:15:31.015970       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:15:31.018172       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:15:31.018479       1 server.go:846] "Version info" version="v1.28.0"
	I1002 22:15:31.042119       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:15:31.043547       1 config.go:188] "Starting service config controller"
	I1002 22:15:31.051494       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:15:31.059113       1 config.go:315] "Starting node config controller"
	I1002 22:15:31.059141       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:15:31.043775       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:15:31.078693       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:15:31.151716       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:15:31.160519       1 shared_informer.go:318] Caches are synced for node config
	I1002 22:15:31.179348       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b311e5165e9b94b09ec1ae5755a3fa63d5356ede8a5b8e26c85b073fad8c5351] <==
	I1002 22:15:25.088278       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:15:30.446722       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1002 22:15:30.446760       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:15:30.465521       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 22:15:30.465824       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 22:15:30.465892       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 22:15:30.466292       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 22:15:30.479043       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:15:30.479142       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:15:30.491190       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:15:30.491215       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 22:15:30.574257       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 22:15:30.615234       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:15:30.615370       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427800     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/988e0d52-fae3-445f-bc46-8ed21d729763-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-zm7zk\" (UID: \"988e0d52-fae3-445f-bc46-8ed21d729763\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427859     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx5t\" (UniqueName: \"kubernetes.io/projected/988e0d52-fae3-445f-bc46-8ed21d729763-kube-api-access-czx5t\") pod \"dashboard-metrics-scraper-5f989dc9cf-zm7zk\" (UID: \"988e0d52-fae3-445f-bc46-8ed21d729763\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427894     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lblj\" (UniqueName: \"kubernetes.io/projected/02d32051-a965-4fa4-9a6e-e03d13faab7d-kube-api-access-8lblj\") pod \"kubernetes-dashboard-8694d4445c-vlglj\" (UID: \"02d32051-a965-4fa4-9a6e-e03d13faab7d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: I1002 22:15:41.427922     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/02d32051-a965-4fa4-9a6e-e03d13faab7d-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-vlglj\" (UID: \"02d32051-a965-4fa4-9a6e-e03d13faab7d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj"
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: W1002 22:15:41.675035     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2 WatchSource:0}: Error finding container cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2: Status 404 returned error can't find the container with id cc46f91e350134f8f72091c652ba9298c4550ce8137306b9ef70612e59ba96d2
	Oct 02 22:15:41 old-k8s-version-173127 kubelet[779]: W1002 22:15:41.710876     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a2aece711092a995bed0f5b4cd61861fd073d28c3e82afcc59f484b1fc354481/crio-5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17 WatchSource:0}: Error finding container 5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17: Status 404 returned error can't find the container with id 5825d7223aec814779c0a31836e476f489f086fdf090ae03755317648c12db17
	Oct 02 22:15:47 old-k8s-version-173127 kubelet[779]: I1002 22:15:47.998379     779 scope.go:117] "RemoveContainer" containerID="6aeee2dc53691b4d67502dc8c2b2f4cf3b42cd174f6bcc05df2a2f4f883bfc00"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: I1002 22:15:49.002816     779 scope.go:117] "RemoveContainer" containerID="6aeee2dc53691b4d67502dc8c2b2f4cf3b42cd174f6bcc05df2a2f4f883bfc00"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: I1002 22:15:49.003758     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:49 old-k8s-version-173127 kubelet[779]: E1002 22:15:49.004104     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:15:50 old-k8s-version-173127 kubelet[779]: I1002 22:15:50.016110     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:50 old-k8s-version-173127 kubelet[779]: E1002 22:15:50.016403     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:15:51 old-k8s-version-173127 kubelet[779]: I1002 22:15:51.621760     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:15:51 old-k8s-version-173127 kubelet[779]: E1002 22:15:51.622531     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:00 old-k8s-version-173127 kubelet[779]: I1002 22:16:00.107758     779 scope.go:117] "RemoveContainer" containerID="63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5"
	Oct 02 22:16:00 old-k8s-version-173127 kubelet[779]: I1002 22:16:00.332172     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vlglj" podStartSLOduration=8.648447374 podCreationTimestamp="2025-10-02 22:15:41 +0000 UTC" firstStartedPulling="2025-10-02 22:15:41.7236844 +0000 UTC m=+24.258080296" lastFinishedPulling="2025-10-02 22:15:52.407301081 +0000 UTC m=+34.941696969" observedRunningTime="2025-10-02 22:15:53.042844062 +0000 UTC m=+35.577239950" watchObservedRunningTime="2025-10-02 22:16:00.332064047 +0000 UTC m=+42.866459943"
	Oct 02 22:16:02 old-k8s-version-173127 kubelet[779]: I1002 22:16:02.786977     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:16:03 old-k8s-version-173127 kubelet[779]: I1002 22:16:03.115159     779 scope.go:117] "RemoveContainer" containerID="01c63b42b288c005206588c7ca0d77a5098300b8621487e47bfc9660445df09e"
	Oct 02 22:16:04 old-k8s-version-173127 kubelet[779]: I1002 22:16:04.119637     779 scope.go:117] "RemoveContainer" containerID="e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	Oct 02 22:16:04 old-k8s-version-173127 kubelet[779]: E1002 22:16:04.120403     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:11 old-k8s-version-173127 kubelet[779]: I1002 22:16:11.622177     779 scope.go:117] "RemoveContainer" containerID="e878da3125cc7504cca8abd5b213d485a124451d3e0f5704773e1e2c7313f73d"
	Oct 02 22:16:11 old-k8s-version-173127 kubelet[779]: E1002 22:16:11.622954     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-zm7zk_kubernetes-dashboard(988e0d52-fae3-445f-bc46-8ed21d729763)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-zm7zk" podUID="988e0d52-fae3-445f-bc46-8ed21d729763"
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:16:22 old-k8s-version-173127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9c9d715891fb1fe0c652d51c4da130ea0afd5634cc063f2c6ba51a847dbbf57f] <==
	2025/10/02 22:15:52 Starting overwatch
	2025/10/02 22:15:52 Using namespace: kubernetes-dashboard
	2025/10/02 22:15:52 Using in-cluster config to connect to apiserver
	2025/10/02 22:15:52 Using secret token for csrf signing
	2025/10/02 22:15:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:15:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:15:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/02 22:15:52 Generating JWE encryption key
	2025/10/02 22:15:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:15:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:15:53 Initializing JWE encryption key from synchronized object
	2025/10/02 22:15:53 Creating in-cluster Sidecar client
	2025/10/02 22:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:15:53 Serving insecurely on HTTP port: 9090
	2025/10/02 22:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [262c0c70277ce895db2bcada5bdf1a9907c5b11a07877c05a26b7ca693ce2690] <==
	I1002 22:16:00.377518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:16:00.411245       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:16:00.425726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 22:16:17.827063       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:16:17.827248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5!
	I1002 22:16:17.827929       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3777a98b-9fef-490e-ac94-3a602207c6ab", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5 became leader
	I1002 22:16:17.931590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-173127_375fcc10-7709-4208-9fe8-5bcbd61471d5!
	
	
	==> storage-provisioner [63792d50544950ac3771e739f5e2e88e49ec0b557f03faae01aa93ce906075d5] <==
	I1002 22:15:28.978896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:15:59.050542       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
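Note on the two storage-provisioner logs above: the first instance dies with a fatal i/o timeout against 10.96.0.1:443 (the kubernetes Service VIP), while its restarted replacement reaches the apiserver and wins the leader lease about 17 seconds later, which suggests kube-proxy had not yet programmed the Service VIP when the first instance started. A minimal sketch of that same connectivity probe, assuming it runs from a pod on the node (the address and the 32-second budget mirror the failing log line; probeAPIServer is a hypothetical helper, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer dials the in-cluster apiserver Service VIP the same way
	// the provisioner's client ultimately does: a TCP connect to 10.96.0.1:443
	// bounded by the 32-second timeout seen in the fatal log line.
	func probeAPIServer() error {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
		if err != nil {
			return fmt.Errorf("error getting server version: %w", err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer(); err != nil {
			fmt.Println(err) // e.g. "dial tcp 10.96.0.1:443: i/o timeout"
			return
		}
		fmt.Println("apiserver Service VIP reachable")
	}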
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-173127 -n old-k8s-version-173127
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-173127 -n old-k8s-version-173127: exit status 2 (396.830833ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-173127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (319.06433ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
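The exit status 11 (MK_ADDON_ENABLE_PAUSED) is produced before the addon is ever applied: minikube first checks whether the cluster is paused by listing runc containers on the node, and on this crio node the default runc state directory /run/runc does not exist, so the listing itself fails. A rough reproduction of that probe, assuming the profile is still up and reachable over minikube ssh (the command mirrors the `sudo runc list -f json` in the stderr above; this is a sketch, not minikube's actual code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same runc listing the paused-state check uses. On a node
		// where /run/runc was never created this exits with status 1, which
		// minikube surfaces as MK_ADDON_ENABLE_PAUSED.
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "default-k8s-diff-port-230628",
			"ssh", "--", "sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("list paused failed:", err) // matches "Process exited with status 1"
		}
	}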
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-230628 describe deploy/metrics-server -n kube-system: exit status 1 (87.95624ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-230628 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
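The image assertion at the end is a consequence of the earlier failure: because `addons enable` exited before applying anything, the metrics-server Deployment was never created, so the describe call returns NotFound and the expected override `fake.domain/registry.k8s.io/echoserver:1.4` cannot appear. A sketch of the check the test effectively performs, assuming kubectl and the same context (it reads the container image directly instead of grepping describe output):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Read back the image the metrics-server addon was deployed with.
		out, err := exec.Command("kubectl",
			"--context", "default-k8s-diff-port-230628", "-n", "kube-system",
			"get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
		if err != nil {
			fmt.Println("deployment missing (enable never ran):", err)
			return
		}
		if !strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon did not load correct image:", string(out))
		}
	}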
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-230628
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-230628:

-- stdout --
	[
	    {
	        "Id": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	        "Created": "2025-10-02T22:15:06.94474657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1455849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:15:07.019767773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hosts",
	        "LogPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef-json.log",
	        "Name": "/default-k8s-diff-port-230628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-230628:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-230628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	                "LowerDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-230628",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-230628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-230628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "504191e8a9409b83a6166770b0a836bb66263ae090a3625ff72303fa9d8eb64f",
	            "SandboxKey": "/var/run/docker/netns/504191e8a940",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34556"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34557"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34560"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34558"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34559"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-230628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:5b:6a:09:4f:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0ce512013df0626d99cabbb56683ffeecfa8da9b150b56cbd6d68363d36b91b",
	                    "EndpointID": "2ff5f0e755700617db1a9341545c3e0715a4d263fdb037f31c655e2e7285f46c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-230628",
	                        "75dade69ea95"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
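One detail worth pulling out of the inspect dump: every container port is published on 127.0.0.1 with an ephemeral host port (8444/tcp, this profile's --apiserver-port, maps to 34559), which is how minikube reaches the apiserver inside the kic container. A small sketch that decodes just that mapping from the same inspect output (the field names match NetworkSettings.Ports above; the portBinding struct is local to this example):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding mirrors one entry of NetworkSettings.Ports in the dump.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .NetworkSettings.Ports}}",
			"default-k8s-diff-port-230628").Output()
		if err != nil {
			panic(err)
		}
		ports := map[string][]portBinding{}
		if err := json.Unmarshal(out, &ports); err != nil {
			panic(err)
		}
		// 8444/tcp is the --apiserver-port this profile was started with.
		for _, b := range ports["8444/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
		}
	}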
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25: (1.894115556s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-198170 sudo crio config                                                                                                                                                                                                             │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ delete  │ -p cilium-198170                                                                                                                                                                                                                              │ cilium-198170                │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │ 02 Oct 25 22:04 UTC │
	│ start   │ -p force-systemd-env-915858 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:04 UTC │                     │
	│ ssh     │ force-systemd-flag-292135 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ delete  │ -p force-systemd-flag-292135                                                                                                                                                                                                                  │ force-systemd-flag-292135    │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:10 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:16:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:16:32.288983 1461647 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:16:32.289400 1461647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:32.289435 1461647 out.go:374] Setting ErrFile to fd 2...
	I1002 22:16:32.289453 1461647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:32.289744 1461647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:16:32.290269 1461647 out.go:368] Setting JSON to false
	I1002 22:16:32.291264 1461647 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25118,"bootTime":1759418275,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:16:32.291360 1461647 start.go:140] virtualization:  
	I1002 22:16:32.295757 1461647 out.go:179] * [embed-certs-080134] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:16:32.299362 1461647 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:16:32.299446 1461647 notify.go:220] Checking for updates...
	I1002 22:16:32.303111 1461647 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:16:32.306558 1461647 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:16:32.309906 1461647 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:16:32.313344 1461647 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:16:32.316525 1461647 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:16:32.320186 1461647 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:32.320345 1461647 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:16:32.345496 1461647 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:16:32.345625 1461647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:32.415063 1461647 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:16:32.405757029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:32.415172 1461647 docker.go:318] overlay module found
	I1002 22:16:32.419312 1461647 out.go:179] * Using the docker driver based on user configuration
	I1002 22:16:32.422413 1461647 start.go:304] selected driver: docker
	I1002 22:16:32.422432 1461647 start.go:924] validating driver "docker" against <nil>
	I1002 22:16:32.422445 1461647 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:16:32.423248 1461647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:32.478270 1461647 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:16:32.468856096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:32.478431 1461647 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:16:32.478667 1461647 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:16:32.481664 1461647 out.go:179] * Using Docker driver with root privileges
	I1002 22:16:32.484692 1461647 cni.go:84] Creating CNI manager for ""
	I1002 22:16:32.484766 1461647 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:16:32.484779 1461647 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:16:32.484858 1461647 start.go:348] cluster config:
	{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:32.488154 1461647 out.go:179] * Starting "embed-certs-080134" primary control-plane node in "embed-certs-080134" cluster
	I1002 22:16:32.491232 1461647 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:16:32.494180 1461647 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:16:32.497038 1461647 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:16:32.497098 1461647 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:16:32.497108 1461647 cache.go:58] Caching tarball of preloaded images
	I1002 22:16:32.497215 1461647 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:16:32.497225 1461647 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:16:32.497337 1461647 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:16:32.497353 1461647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json: {Name:mk92463e5c6ebee7b8cbc54b274d2c1841f8925f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:16:32.497511 1461647 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:16:32.523535 1461647 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:16:32.523560 1461647 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:16:32.523573 1461647 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:16:32.523598 1461647 start.go:360] acquireMachinesLock for embed-certs-080134: {Name:mkb3c88b79da323c6aaa02ac6130cdaf0d74178c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:16:32.523714 1461647 start.go:364] duration metric: took 90.148µs to acquireMachinesLock for "embed-certs-080134"
	I1002 22:16:32.523741 1461647 start.go:93] Provisioning new machine with config: &{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:16:32.523819 1461647 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:16:32.527242 1461647 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:16:32.527491 1461647 start.go:159] libmachine.API.Create for "embed-certs-080134" (driver="docker")
	I1002 22:16:32.527537 1461647 client.go:168] LocalClient.Create starting
	I1002 22:16:32.527610 1461647 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:16:32.527654 1461647 main.go:141] libmachine: Decoding PEM data...
	I1002 22:16:32.527673 1461647 main.go:141] libmachine: Parsing certificate...
	I1002 22:16:32.527727 1461647 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:16:32.527746 1461647 main.go:141] libmachine: Decoding PEM data...
	I1002 22:16:32.527756 1461647 main.go:141] libmachine: Parsing certificate...
	I1002 22:16:32.528117 1461647 cli_runner.go:164] Run: docker network inspect embed-certs-080134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:16:32.547728 1461647 cli_runner.go:211] docker network inspect embed-certs-080134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:16:32.547831 1461647 network_create.go:284] running [docker network inspect embed-certs-080134] to gather additional debugging logs...
	I1002 22:16:32.547851 1461647 cli_runner.go:164] Run: docker network inspect embed-certs-080134
	W1002 22:16:32.564969 1461647 cli_runner.go:211] docker network inspect embed-certs-080134 returned with exit code 1
	I1002 22:16:32.565001 1461647 network_create.go:287] error running [docker network inspect embed-certs-080134]: docker network inspect embed-certs-080134: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-080134 not found
	I1002 22:16:32.565015 1461647 network_create.go:289] output of [docker network inspect embed-certs-080134]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-080134 not found
	
	** /stderr **
	I1002 22:16:32.565116 1461647 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:16:32.583909 1461647 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:16:32.584259 1461647 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:16:32.584628 1461647 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:16:32.584885 1461647 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b0ce512013df IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:49:ad:35:66:e4} reservation:<nil>}
	I1002 22:16:32.585347 1461647 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a24700}
	I1002 22:16:32.585371 1461647 network_create.go:124] attempt to create docker network embed-certs-080134 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 22:16:32.585435 1461647 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-080134 embed-certs-080134
	I1002 22:16:32.657196 1461647 network_create.go:108] docker network embed-certs-080134 192.168.85.0/24 created
	I1002 22:16:32.657233 1461647 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-080134" container
	I1002 22:16:32.657318 1461647 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:16:32.673938 1461647 cli_runner.go:164] Run: docker volume create embed-certs-080134 --label name.minikube.sigs.k8s.io=embed-certs-080134 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:16:32.692109 1461647 oci.go:103] Successfully created a docker volume embed-certs-080134
	I1002 22:16:32.692197 1461647 cli_runner.go:164] Run: docker run --rm --name embed-certs-080134-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-080134 --entrypoint /usr/bin/test -v embed-certs-080134:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:16:33.232860 1461647 oci.go:107] Successfully prepared a docker volume embed-certs-080134
	I1002 22:16:33.232916 1461647 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:16:33.232937 1461647 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:16:33.233014 1461647 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-080134:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
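
The subnet probe above walks candidate private /24s (192.168.49.0, .58, .67, .76) until it finds one with no matching bridge, then creates the network on 192.168.85.0/24. Below is a minimal Go sketch of that search, assuming the step of 9 between candidate third octets that the log suggests; it is not necessarily minikube's exact rule.

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet mimics the probe visible in the log above: starting at
// 192.168.49.0/24 and stepping the third octet by 9 (49, 58, 67, 76, 85, ...),
// return the first candidate that does not overlap an existing network.
// The starting octet and step are assumptions read off the log.
func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for octet := 49; octet <= 255; octet += 9 {
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		if err != nil {
			return nil, err
		}
		used := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				used = true
				break
			}
		}
		if !used {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free subnet found")
}

func main() {
	var taken []*net.IPNet
	for _, s := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		_, n, _ := net.ParseCIDR(s)
		taken = append(taken, n)
	}
	free, _ := firstFreeSubnet(taken)
	fmt.Println(free) // 192.168.85.0/24, matching the log
}
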
	
	
	==> CRI-O <==
	Oct 02 22:16:24 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:24.84524751Z" level=info msg="Created container b6f3d1a3a3584b8097abf412942018689aaa7bbe8e12af5f2a36810353416245: kube-system/coredns-66bc5c9577-jvqks/coredns" id=ddfa9a3b-09d2-48b2-9cc1-99402739770e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:16:24 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:24.850333287Z" level=info msg="Starting container: b6f3d1a3a3584b8097abf412942018689aaa7bbe8e12af5f2a36810353416245" id=2fb6fef7-6c27-4002-acf8-6c1bb713bba4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:16:24 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:24.855328176Z" level=info msg="Started container" PID=1726 containerID=b6f3d1a3a3584b8097abf412942018689aaa7bbe8e12af5f2a36810353416245 description=kube-system/coredns-66bc5c9577-jvqks/coredns id=2fb6fef7-6c27-4002-acf8-6c1bb713bba4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e512017ab5bd33bf03703884d916722a976ea6b57fa291947d5ab7a17978410b
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.173543038Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2202d773-c7a7-4ea6-8d0a-1858bcd086f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.173614774Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.188120796Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394 UID:d5cdadc6-b07a-446b-881e-e2297b0df1af NetNS:/var/run/netns/83aeb609-9a9d-4cff-96e0-ceaba7bc21f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004a6958}] Aliases:map[]}"
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.188301396Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.199371074Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394 UID:d5cdadc6-b07a-446b-881e-e2297b0df1af NetNS:/var/run/netns/83aeb609-9a9d-4cff-96e0-ceaba7bc21f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004a6958}] Aliases:map[]}"
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.199525065Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.20291753Z" level=info msg="Ran pod sandbox 2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394 with infra container: default/busybox/POD" id=2202d773-c7a7-4ea6-8d0a-1858bcd086f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.204513423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ece5b330-2a35-46bf-8ac2-22c10642ff1f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.204745107Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ece5b330-2a35-46bf-8ac2-22c10642ff1f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.204856588Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ece5b330-2a35-46bf-8ac2-22c10642ff1f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.207575522Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a1c2dec1-5a50-4885-95b8-fb5eafe4a2fc name=/runtime.v1.ImageService/PullImage
	Oct 02 22:16:28 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:28.209723365Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.274263328Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a1c2dec1-5a50-4885-95b8-fb5eafe4a2fc name=/runtime.v1.ImageService/PullImage
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.274940091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8862edb4-fb19-4b31-b592-058a53276cc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.278264545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a433171-6187-4845-99d4-c64894a6a00f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.285395806Z" level=info msg="Creating container: default/busybox/busybox" id=a1279af7-b422-49c3-8a93-529b1b8b3427 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.28625006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.290933652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.291393304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.306888059Z" level=info msg="Created container 89f18d12d8b63184a789d04aae60658113535219130b055f1e1df2874bbbc437: default/busybox/busybox" id=a1279af7-b422-49c3-8a93-529b1b8b3427 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.307801142Z" level=info msg="Starting container: 89f18d12d8b63184a789d04aae60658113535219130b055f1e1df2874bbbc437" id=cf566b9c-aacf-4f75-9f35-250801184a68 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:16:30 default-k8s-diff-port-230628 crio[834]: time="2025-10-02T22:16:30.309942462Z" level=info msg="Started container" PID=1777 containerID=89f18d12d8b63184a789d04aae60658113535219130b055f1e1df2874bbbc437 description=default/busybox/busybox id=cf566b9c-aacf-4f75-9f35-250801184a68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	89f18d12d8b63       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   2596b1a99c129       busybox                                                default
	b6f3d1a3a3584       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   e512017ab5bd3       coredns-66bc5c9577-jvqks                               kube-system
	1b19c694f3ef0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   88b9cca5ff186       storage-provisioner                                    kube-system
	56b90ab171ebc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   27c98fc86cd91       kindnet-lvsjr                                          kube-system
	0a28ec99c655f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   fbc08ba5b8be4       kube-proxy-4l9vx                                       kube-system
	98bccca616287       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   b2ab1d6676bf0       kube-apiserver-default-k8s-diff-port-230628            kube-system
	b2b090b6ddbd2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   62247cd6fab20       kube-scheduler-default-k8s-diff-port-230628            kube-system
	502f5bbf70f57       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6189738dc4e23       etcd-default-k8s-diff-port-230628                      kube-system
	2495b032ab2f2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   d4947a5dfc360       kube-controller-manager-default-k8s-diff-port-230628   kube-system
	
	
	==> coredns [b6f3d1a3a3584b8097abf412942018689aaa7bbe8e12af5f2a36810353416245] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57802 - 62915 "HINFO IN 8248686110141368780.8992627714200434181. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012863008s
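
CoreDNS answers its own HINFO self-check above on 127.0.0.1. To exercise the same resolver from inside the cluster, here is a short sketch pointing a Go resolver at the kube-dns ClusterIP (10.96.0.10, the address allocated in the kube-apiserver log further down); the query name is the standard in-cluster apiserver service.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route all lookups to the cluster DNS service directly.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs) // expect [10.96.0.1]
}
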
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-230628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-230628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=default-k8s-diff-port-230628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:15:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-230628
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:16:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:16:39 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:16:39 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:16:39 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:16:39 +0000   Thu, 02 Oct 2025 22:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-230628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 e608a1fbb6124cfdbf5ba6c4ce66a621
	  System UUID:                6317d78e-133b-4770-8f0b-21b4d8e9ee44
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-jvqks                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-230628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-lvsjr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-230628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-230628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-4l9vx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-230628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-230628 event: Registered Node default-k8s-diff-port-230628 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-230628 status is now: NodeReady
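
As a sanity check on the "Allocated resources" table above: 850m requested of 2 CPUs (2000m) is 42.5%, and 220Mi (225280Ki) of 8022308Ki allocatable memory is about 2.8%; kubectl truncates both to the 42% and 2% shown. The same integer arithmetic:

package main

import "fmt"

func main() {
	// Figures taken from the node description above: requests vs. allocatable.
	cpuReqMilli, cpuAllocMilli := 850, 2000   // 850m of 2 CPUs
	memReqKi, memAllocKi := 220*1024, 8022308 // 220Mi of 8022308Ki
	fmt.Printf("cpu: %d%%\n", cpuReqMilli*100/cpuAllocMilli) // 42 (42.5 truncated)
	fmt.Printf("memory: %d%%\n", memReqKi*100/memAllocKi)    // 2 (2.8 truncated)
}
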
	
	
	==> dmesg <==
	[Oct 2 21:43] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:44] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [502f5bbf70f57e0e9b706436fe218353b502c7e498613c8831c7dc4cf26f6415] <==
	{"level":"warn","ts":"2025-10-02T22:15:33.301301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.320513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.339958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.358161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.376066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.394656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.410807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.427794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.472998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.487020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.508679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.518513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.536977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.554225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.571687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.589105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.606416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.623402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.640458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.657488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.682016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.710346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.730916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.751189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:15:33.847395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57900","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:16:39 up  6:58,  0 user,  load average: 3.13, 2.22, 2.02
	Linux default-k8s-diff-port-230628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [56b90ab171ebcb82934c9dbf40c56f0da7ad528026f4546e7f9ad5ad2fa5cf9b] <==
	I1002 22:15:43.814348       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:15:43.814614       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:15:43.814741       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:15:43.814753       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:15:43.814765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:15:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:15:44.106528       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:15:44.106609       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:15:44.106645       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:15:44.107519       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:16:14.107270       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:16:14.107288       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:16:14.107384       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:16:14.107472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:16:15.407416       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:16:15.407448       1 metrics.go:72] Registering metrics
	I1002 22:16:15.407509       1 controller.go:711] "Syncing nftables rules"
	I1002 22:16:24.114148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:16:24.114282       1 main.go:301] handling current node
	I1002 22:16:34.107883       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:16:34.108051       1 main.go:301] handling current node
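
kindnet's reflector above fails its initial List calls against https://10.96.0.1:443 with i/o timeouts, retries, and reports its caches synced a second later. A minimal sketch of that retry-until-reachable pattern with client-go (in-cluster config assumed; the Limit matches the "?limit=500" in the failed requests):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the same 10.96.0.1:443 service endpoint
	// the kindnet log shows timing out.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{Limit: 500})
		cancel()
		if err == nil {
			fmt.Printf("synced: %d node(s)\n", len(nodes.Items))
			return
		}
		fmt.Println("list failed, retrying:", err)
		time.Sleep(2 * time.Second)
	}
}
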
	
	
	==> kube-apiserver [98bccca6162870888f3ea13aa563e65d75e4a6b711cd41d32b49cf6df7f110c0] <==
	I1002 22:15:34.785471       1 policy_source.go:240] refreshing policies
	I1002 22:15:34.815470       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:15:34.843343       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 22:15:34.844256       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:15:34.862384       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:15:34.866993       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:15:34.995930       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:15:35.502883       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 22:15:35.510310       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 22:15:35.510338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:15:36.286720       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:15:36.338947       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:15:36.434677       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 22:15:36.443267       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 22:15:36.444682       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:15:36.455890       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:15:36.669367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:15:37.483510       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:15:37.503876       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 22:15:37.519315       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 22:15:42.039354       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:15:42.613102       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:15:42.682232       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:15:42.896551       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1002 22:16:37.055917       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:33240: use of closed network connection
	
	
	==> kube-controller-manager [2495b032ab2f215419b395f5e30c77832e2ffdd70c3f73ffe0c42240cd8c137c] <==
	I1002 22:15:41.713655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:15:41.720025       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:15:41.720114       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:15:41.756867       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:15:41.756975       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:15:41.757055       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-230628"
	I1002 22:15:41.757110       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 22:15:41.757176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:15:41.757182       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:15:41.757188       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:15:41.759992       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:15:41.760312       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 22:15:41.760355       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:15:41.760727       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:15:41.771433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 22:15:41.771617       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 22:15:41.771654       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 22:15:41.771660       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 22:15:41.771666       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 22:15:41.772355       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:15:41.782166       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:15:41.782210       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:15:41.806155       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-230628" podCIDRs=["10.244.0.0/24"]
	I1002 22:15:41.843357       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:16:26.763169       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0a28ec99c655ff163e35d337cad61930ae6df9dbc1a93798ce5766a05d47bda5] <==
	I1002 22:15:44.167730       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:15:44.287694       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:15:44.388656       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:15:44.388698       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:15:44.388761       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:15:44.447639       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:15:44.450128       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:15:44.463729       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:15:44.466396       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:15:44.466423       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:15:44.472299       1 config.go:200] "Starting service config controller"
	I1002 22:15:44.476457       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:15:44.476691       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:15:44.476705       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:15:44.476856       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:15:44.476867       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:15:44.479925       1 config.go:309] "Starting node config controller"
	I1002 22:15:44.479948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:15:44.479955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:15:44.576994       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:15:44.577093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:15:44.577120       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b2b090b6ddbd20920170aabf9e3f272d5ac7301cde65a27c1b12449800e76dee] <==
	E1002 22:15:34.809198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:15:34.809238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:15:34.809282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:15:34.808067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:15:34.815151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:15:34.816928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:15:34.817012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:15:34.817076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:15:34.817193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:15:34.822107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:15:34.822387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:15:34.822688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:15:34.822750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:15:35.684602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:15:35.685352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:15:35.738976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:15:35.757928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:15:35.780852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:15:35.797986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:15:35.836513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 22:15:35.863096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:15:35.911532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:15:35.930003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:15:36.022234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1002 22:15:37.793373       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
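
The scheduler's startup errors above are RBAC denials ("cannot list ... at the cluster scope") that clear once the extension-apiserver client-ca syncs at 22:15:37. To check one of the denied verb/resource pairs the way `kubectl auth can-i` does, here is a sketch using a SelfSubjectAccessReview; kubeconfig resolution via the default loading rules is an assumption.

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig the environment provides (assumed default rules).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver: may the current credentials list csinodes?
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb: "list", Group: "storage.k8s.io", Resource: "csinodes",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}
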
	
	
	==> kubelet <==
	Oct 02 22:15:38 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:38.736304    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-230628" podStartSLOduration=2.736285707 podStartE2EDuration="2.736285707s" podCreationTimestamp="2025-10-02 22:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:15:38.671157838 +0000 UTC m=+1.341282412" watchObservedRunningTime="2025-10-02 22:15:38.736285707 +0000 UTC m=+1.406410290"
	Oct 02 22:15:41 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:41.810298    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 22:15:41 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:41.811110    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198158    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08caf7ea-dac3-4c1f-877c-05db698d12e7-kube-proxy\") pod \"kube-proxy-4l9vx\" (UID: \"08caf7ea-dac3-4c1f-877c-05db698d12e7\") " pod="kube-system/kube-proxy-4l9vx"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198199    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08caf7ea-dac3-4c1f-877c-05db698d12e7-xtables-lock\") pod \"kube-proxy-4l9vx\" (UID: \"08caf7ea-dac3-4c1f-877c-05db698d12e7\") " pod="kube-system/kube-proxy-4l9vx"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198221    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08caf7ea-dac3-4c1f-877c-05db698d12e7-lib-modules\") pod \"kube-proxy-4l9vx\" (UID: \"08caf7ea-dac3-4c1f-877c-05db698d12e7\") " pod="kube-system/kube-proxy-4l9vx"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198241    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphlp\" (UniqueName: \"kubernetes.io/projected/08caf7ea-dac3-4c1f-877c-05db698d12e7-kube-api-access-hphlp\") pod \"kube-proxy-4l9vx\" (UID: \"08caf7ea-dac3-4c1f-877c-05db698d12e7\") " pod="kube-system/kube-proxy-4l9vx"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198271    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8186228-286b-41f0-a1c6-73ee4855d875-lib-modules\") pod \"kindnet-lvsjr\" (UID: \"f8186228-286b-41f0-a1c6-73ee4855d875\") " pod="kube-system/kindnet-lvsjr"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198288    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk5x\" (UniqueName: \"kubernetes.io/projected/f8186228-286b-41f0-a1c6-73ee4855d875-kube-api-access-kqk5x\") pod \"kindnet-lvsjr\" (UID: \"f8186228-286b-41f0-a1c6-73ee4855d875\") " pod="kube-system/kindnet-lvsjr"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198307    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f8186228-286b-41f0-a1c6-73ee4855d875-cni-cfg\") pod \"kindnet-lvsjr\" (UID: \"f8186228-286b-41f0-a1c6-73ee4855d875\") " pod="kube-system/kindnet-lvsjr"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.198325    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8186228-286b-41f0-a1c6-73ee4855d875-xtables-lock\") pod \"kindnet-lvsjr\" (UID: \"f8186228-286b-41f0-a1c6-73ee4855d875\") " pod="kube-system/kindnet-lvsjr"
	Oct 02 22:15:43 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:43.343079    1302 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:15:44 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:44.659287    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lvsjr" podStartSLOduration=2.659269675 podStartE2EDuration="2.659269675s" podCreationTimestamp="2025-10-02 22:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:15:44.652637932 +0000 UTC m=+7.322762515" watchObservedRunningTime="2025-10-02 22:15:44.659269675 +0000 UTC m=+7.329394250"
	Oct 02 22:15:44 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:15:44.720229    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4l9vx" podStartSLOduration=2.720208399 podStartE2EDuration="2.720208399s" podCreationTimestamp="2025-10-02 22:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:15:44.694222088 +0000 UTC m=+7.364346679" watchObservedRunningTime="2025-10-02 22:15:44.720208399 +0000 UTC m=+7.390332974"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:24.256098    1302 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:24.349200    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rr8\" (UniqueName: \"kubernetes.io/projected/ca549a84-19db-4830-a4a8-c9101d37fa26-kube-api-access-t6rr8\") pod \"storage-provisioner\" (UID: \"ca549a84-19db-4830-a4a8-c9101d37fa26\") " pod="kube-system/storage-provisioner"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:24.349515    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ca549a84-19db-4830-a4a8-c9101d37fa26-tmp\") pod \"storage-provisioner\" (UID: \"ca549a84-19db-4830-a4a8-c9101d37fa26\") " pod="kube-system/storage-provisioner"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:24.450101    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljhsz\" (UniqueName: \"kubernetes.io/projected/206b4ea5-2f69-433d-bc97-57d4534c0d3e-kube-api-access-ljhsz\") pod \"coredns-66bc5c9577-jvqks\" (UID: \"206b4ea5-2f69-433d-bc97-57d4534c0d3e\") " pod="kube-system/coredns-66bc5c9577-jvqks"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:24.450350    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/206b4ea5-2f69-433d-bc97-57d4534c0d3e-config-volume\") pod \"coredns-66bc5c9577-jvqks\" (UID: \"206b4ea5-2f69-433d-bc97-57d4534c0d3e\") " pod="kube-system/coredns-66bc5c9577-jvqks"
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: W1002 22:16:24.639917    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/crio-88b9cca5ff186053ff4aa6cf01142988e96d2b46539905c9c38df4928ceba8de WatchSource:0}: Error finding container 88b9cca5ff186053ff4aa6cf01142988e96d2b46539905c9c38df4928ceba8de: Status 404 returned error can't find the container with id 88b9cca5ff186053ff4aa6cf01142988e96d2b46539905c9c38df4928ceba8de
	Oct 02 22:16:24 default-k8s-diff-port-230628 kubelet[1302]: W1002 22:16:24.733603    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/crio-e512017ab5bd33bf03703884d916722a976ea6b57fa291947d5ab7a17978410b WatchSource:0}: Error finding container e512017ab5bd33bf03703884d916722a976ea6b57fa291947d5ab7a17978410b: Status 404 returned error can't find the container with id e512017ab5bd33bf03703884d916722a976ea6b57fa291947d5ab7a17978410b
	Oct 02 22:16:25 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:25.733026    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.733005628 podStartE2EDuration="41.733005628s" podCreationTimestamp="2025-10-02 22:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:16:25.732126119 +0000 UTC m=+48.402250702" watchObservedRunningTime="2025-10-02 22:16:25.733005628 +0000 UTC m=+48.403130203"
	Oct 02 22:16:27 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:27.861981    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jvqks" podStartSLOduration=45.861960712 podStartE2EDuration="45.861960712s" podCreationTimestamp="2025-10-02 22:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:16:25.754210714 +0000 UTC m=+48.424335289" watchObservedRunningTime="2025-10-02 22:16:27.861960712 +0000 UTC m=+50.532085295"
	Oct 02 22:16:27 default-k8s-diff-port-230628 kubelet[1302]: I1002 22:16:27.979798    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8r6l\" (UniqueName: \"kubernetes.io/projected/d5cdadc6-b07a-446b-881e-e2297b0df1af-kube-api-access-n8r6l\") pod \"busybox\" (UID: \"d5cdadc6-b07a-446b-881e-e2297b0df1af\") " pod="default/busybox"
	Oct 02 22:16:28 default-k8s-diff-port-230628 kubelet[1302]: W1002 22:16:28.203556    1302 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/crio-2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394 WatchSource:0}: Error finding container 2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394: Status 404 returned error can't find the container with id 2596b1a99c129009ccdf6a320052742254a5a91b5104f09b4000b05e21e15394
	
	
	==> storage-provisioner [1b19c694f3ef017ff6bbe250d8145ddee5500e4d1d742bb6c87d314e6b871a20] <==
	I1002 22:16:24.809607       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:16:24.842657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:16:24.842769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:16:24.846302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:24.872295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:16:24.888841       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:16:24.912769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_73081e05-1add-414f-bc22-c4c14a3db4f5!
	I1002 22:16:24.894824       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ee210b-5507-4164-84d5-3e6947882443", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-230628_73081e05-1add-414f-bc22-c4c14a3db4f5 became leader
	W1002 22:16:24.943071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:24.948130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:16:25.013073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_73081e05-1add-414f-bc22-c4c14a3db4f5!
	W1002 22:16:26.951695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:26.959981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:28.962787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:28.969352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:30.973165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:30.980665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:32.987899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:33.011801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:35.015409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:35.020776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:37.029269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:37.112149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:39.116212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:16:39.143403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.43s)
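note: the storage-provisioner log above repeats the same client-go deprecation warning roughly every two seconds, apparently from its leader-election traffic against the kube-system/k8s.io-minikube-hostpath Endpoints object, which v1.33+ API servers now flag. A hedged way to compare the legacy object with the discovery.k8s.io/v1 replacement the warning points to, reusing the context name from this test (standard kubectl subcommands, nothing minikube-specific):

	# Legacy Endpoints object the provisioner's lease traffic touches
	kubectl --context default-k8s-diff-port-230628 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# EndpointSlice view (discovery.k8s.io/v1) the deprecation warning recommends
	kubectl --context default-k8s-diff-port-230628 -n kube-system get endpointslices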

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-230628 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-230628 --alsologtostderr -v=1: exit status 80 (2.463801067s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-230628 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:18:03.006285 1467177 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:18:03.006529 1467177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:03.006562 1467177 out.go:374] Setting ErrFile to fd 2...
	I1002 22:18:03.006581 1467177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:03.007042 1467177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:18:03.007719 1467177 out.go:368] Setting JSON to false
	I1002 22:18:03.007823 1467177 mustload.go:65] Loading cluster: default-k8s-diff-port-230628
	I1002 22:18:03.008584 1467177 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:03.009366 1467177 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:18:03.028760 1467177 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:18:03.029106 1467177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:03.087210 1467177 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:18:03.077157926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:03.087927 1467177 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-230628 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:18:03.094158 1467177 out.go:179] * Pausing node default-k8s-diff-port-230628 ... 
	I1002 22:18:03.097155 1467177 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:18:03.097529 1467177 ssh_runner.go:195] Run: systemctl --version
	I1002 22:18:03.097699 1467177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:18:03.118170 1467177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:18:03.217198 1467177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:03.234476 1467177 pause.go:51] kubelet running: true
	I1002 22:18:03.234576 1467177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:18:03.477279 1467177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:18:03.477383 1467177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:18:03.547535 1467177 cri.go:89] found id: "a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24"
	I1002 22:18:03.547563 1467177 cri.go:89] found id: "fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6"
	I1002 22:18:03.547574 1467177 cri.go:89] found id: "809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963"
	I1002 22:18:03.547578 1467177 cri.go:89] found id: "cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	I1002 22:18:03.547582 1467177 cri.go:89] found id: "c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68"
	I1002 22:18:03.547585 1467177 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:18:03.547588 1467177 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:18:03.547591 1467177 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:18:03.547595 1467177 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:18:03.547601 1467177 cri.go:89] found id: "7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	I1002 22:18:03.547604 1467177 cri.go:89] found id: "cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c"
	I1002 22:18:03.547607 1467177 cri.go:89] found id: ""
	I1002 22:18:03.547658 1467177 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:18:03.559028 1467177 retry.go:31] will retry after 187.90412ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:03Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:03.747468 1467177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:03.762850 1467177 pause.go:51] kubelet running: false
	I1002 22:18:03.762938 1467177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:18:03.967017 1467177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:18:03.967094 1467177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:18:04.051268 1467177 cri.go:89] found id: "a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24"
	I1002 22:18:04.051289 1467177 cri.go:89] found id: "fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6"
	I1002 22:18:04.051294 1467177 cri.go:89] found id: "809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963"
	I1002 22:18:04.051297 1467177 cri.go:89] found id: "cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	I1002 22:18:04.051301 1467177 cri.go:89] found id: "c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68"
	I1002 22:18:04.051304 1467177 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:18:04.051307 1467177 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:18:04.051310 1467177 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:18:04.051313 1467177 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:18:04.051322 1467177 cri.go:89] found id: "7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	I1002 22:18:04.051325 1467177 cri.go:89] found id: "cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c"
	I1002 22:18:04.051331 1467177 cri.go:89] found id: ""
	I1002 22:18:04.051389 1467177 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:18:04.064404 1467177 retry.go:31] will retry after 380.850524ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:04Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:04.446079 1467177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:04.459492 1467177 pause.go:51] kubelet running: false
	I1002 22:18:04.459556 1467177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:18:04.625766 1467177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:18:04.625866 1467177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:18:04.701047 1467177 cri.go:89] found id: "a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24"
	I1002 22:18:04.701069 1467177 cri.go:89] found id: "fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6"
	I1002 22:18:04.701108 1467177 cri.go:89] found id: "809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963"
	I1002 22:18:04.701121 1467177 cri.go:89] found id: "cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	I1002 22:18:04.701125 1467177 cri.go:89] found id: "c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68"
	I1002 22:18:04.701130 1467177 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:18:04.701133 1467177 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:18:04.701136 1467177 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:18:04.701139 1467177 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:18:04.701146 1467177 cri.go:89] found id: "7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	I1002 22:18:04.701151 1467177 cri.go:89] found id: "cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c"
	I1002 22:18:04.701154 1467177 cri.go:89] found id: ""
	I1002 22:18:04.701216 1467177 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:18:04.712197 1467177 retry.go:31] will retry after 388.174312ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:04Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:05.101497 1467177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:05.117607 1467177 pause.go:51] kubelet running: false
	I1002 22:18:05.117676 1467177 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:18:05.294941 1467177 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:18:05.295023 1467177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:18:05.378097 1467177 cri.go:89] found id: "a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24"
	I1002 22:18:05.378171 1467177 cri.go:89] found id: "fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6"
	I1002 22:18:05.378190 1467177 cri.go:89] found id: "809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963"
	I1002 22:18:05.378227 1467177 cri.go:89] found id: "cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	I1002 22:18:05.378250 1467177 cri.go:89] found id: "c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68"
	I1002 22:18:05.378271 1467177 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:18:05.378288 1467177 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:18:05.378328 1467177 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:18:05.378349 1467177 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:18:05.378369 1467177 cri.go:89] found id: "7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	I1002 22:18:05.378387 1467177 cri.go:89] found id: "cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c"
	I1002 22:18:05.378416 1467177 cri.go:89] found id: ""
	I1002 22:18:05.378504 1467177 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:18:05.394208 1467177 out.go:203] 
	W1002 22:18:05.397065 1467177 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:18:05.397084 1467177 out.go:285] * 
	* 
	W1002 22:18:05.406374 1467177 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:18:05.411231 1467177 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-230628 --alsologtostderr -v=1 failed: exit status 80
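note: the stderr trace shows the shape of the failure: pause first disables the kubelet (pause.go reports running: true on the first pass, false afterwards), crictl still enumerates eleven running containers across the kube-system, kubernetes-dashboard, and istio-operator namespaces, but every `sudo runc list -f json` call dies on `open /run/runc: no such file or directory`, and retry.go gives up after three backoffs (~188ms, ~381ms, ~388ms) with GUEST_PAUSE. A hedged way to reproduce the mismatch by hand inside the node (profile name taken from the log; that CRI-O on this image keeps its runc state somewhere other than /run/runc is an assumption, not something the log proves):

	# Fails the same way the pause path does
	minikube ssh -p default-k8s-diff-port-230628 -- sudo runc list -f json
	# The CRI view still sees the containers listed by cri.go above
	minikube ssh -p default-k8s-diff-port-230628 -- sudo crictl ps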
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-230628
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-230628:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	        "Created": "2025-10-02T22:15:06.94474657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1464385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:16:53.185365109Z",
	            "FinishedAt": "2025-10-02T22:16:52.118863211Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hosts",
	        "LogPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef-json.log",
	        "Name": "/default-k8s-diff-port-230628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-230628:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-230628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	                "LowerDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-230628",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-230628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-230628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89debf29f1e53388f53ce6ee9dba0cfaad5ed694d7fddddbcaf246a5ce7571ac",
	            "SandboxKey": "/var/run/docker/netns/89debf29f1e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34571"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34572"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34575"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34573"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34574"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-230628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:a0:59:4c:5b:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0ce512013df0626d99cabbb56683ffeecfa8da9b150b56cbd6d68363d36b91b",
	                    "EndpointID": "f7cd068edcb68bacff0d9567761c7f7d81467d3fafeb93b5cfebebea9a1d55a3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-230628",
	                        "75dade69ea95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
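note: the SSH endpoint the earlier sshutil line dialed (127.0.0.1:34571) is visible in this inspect output under NetworkSettings.Ports; the same Go template the cli_runner step used extracts it directly (the template is copied verbatim from the trace above, so only the shell quoting here is an editorial choice):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-230628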
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628: exit status 2 (373.188882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25: (1.327427194s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:16:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:16:52.815690 1464250 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:16:52.815816 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.815822 1464250 out.go:374] Setting ErrFile to fd 2...
	I1002 22:16:52.815827 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.816195 1464250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:16:52.816652 1464250 out.go:368] Setting JSON to false
	I1002 22:16:52.817603 1464250 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25138,"bootTime":1759418275,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:16:52.817703 1464250 start.go:140] virtualization:  
	I1002 22:16:52.820985 1464250 out.go:179] * [default-k8s-diff-port-230628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:16:52.825015 1464250 notify.go:220] Checking for updates...
	I1002 22:16:52.825905 1464250 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:16:52.829013 1464250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:16:52.832208 1464250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:16:52.835173 1464250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:16:52.838076 1464250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:16:52.841779 1464250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:16:52.845265 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:52.845844 1464250 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:16:52.885245 1464250 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:16:52.885452 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:52.979074 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:52.969867837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:52.979176 1464250 docker.go:318] overlay module found
	I1002 22:16:52.982431 1464250 out.go:179] * Using the docker driver based on existing profile
	I1002 22:16:52.985315 1464250 start.go:304] selected driver: docker
	I1002 22:16:52.985335 1464250 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:52.985449 1464250 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:16:52.986258 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:53.084126 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:53.06918457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:53.084467 1464250 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:16:53.084504 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:16:53.084559 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
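
The kindnet recommendation above follows from the driver/runtime pair alone: the docker driver provides no pod networking of its own, so with a non-docker runtime such as cri-o a real CNI must be installed. A minimal Go sketch of that rule (chooseCNI is an illustrative name, not minikube's actual function):

    package main

    import "fmt"

    // chooseCNI is a hypothetical reduction of the decision logged by
    // cni.go:143: docker driver + non-docker runtime needs a CNI, and
    // kindnet is the default pick.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "" // the runtime's built-in networking suffices
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
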
	I1002 22:16:53.084591 1464250 start.go:348] cluster config:
	{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:53.087838 1464250 out.go:179] * Starting "default-k8s-diff-port-230628" primary control-plane node in "default-k8s-diff-port-230628" cluster
	I1002 22:16:53.090692 1464250 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:16:53.093491 1464250 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:16:53.096271 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:16:53.096326 1464250 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:16:53.096335 1464250 cache.go:58] Caching tarball of preloaded images
	I1002 22:16:53.096433 1464250 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:16:53.096443 1464250 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
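
The preload check above keys the tarball name on the k8s version, runtime and architecture, and skips the download when the file already sits in the cache. A small sketch of that lookup under assumed naming (preloadPath is hypothetical; the real path layout is the one visible in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath rebuilds the cache filename seen in the log:
    // preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
    func preloadPath(miniHome, k8sVer, runtime, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
            k8sVer, runtime, arch)
        return filepath.Join(miniHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.34.1", "cri-o", "arm64")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no preload, would download:", p)
        }
    }
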
	I1002 22:16:53.096554 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.096778 1464250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:16:53.123538 1464250 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:16:53.123557 1464250 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:16:53.123580 1464250 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:16:53.123605 1464250 start.go:360] acquireMachinesLock for default-k8s-diff-port-230628: {Name:mk03e8992f46bd2d7f7874118d4f399e26ab9e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:16:53.123670 1464250 start.go:364] duration metric: took 47.04µs to acquireMachinesLock for "default-k8s-diff-port-230628"
	I1002 22:16:53.123691 1464250 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:16:53.123704 1464250 fix.go:54] fixHost starting: 
	I1002 22:16:53.123992 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.144033 1464250 fix.go:112] recreateIfNeeded on default-k8s-diff-port-230628: state=Stopped err=<nil>
	W1002 22:16:53.144061 1464250 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:16:53.494318 1461647 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:16:53.495105 1461647 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:16:54.209761 1461647 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:16:54.898068 1461647 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:16:55.163488 1461647 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:16:56.207823 1461647 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:16:56.417222 1461647 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:16:56.418360 1461647 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:16:56.426417 1461647 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:16:56.430091 1461647 out.go:252]   - Booting up control plane ...
	I1002 22:16:56.430209 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:16:56.430302 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:16:56.430374 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:16:56.449134 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:16:56.449255 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:16:56.458304 1461647 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:16:56.458417 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:16:56.458470 1461647 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:16:56.582476 1461647 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:16:56.582603 1461647 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:16:53.147360 1464250 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-230628" ...
	I1002 22:16:53.147462 1464250 cli_runner.go:164] Run: docker start default-k8s-diff-port-230628
	I1002 22:16:53.486318 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.511179 1464250 kic.go:430] container "default-k8s-diff-port-230628" state is running.
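
Restarting the stopped kic container and confirming it reached the running state comes down to `docker start` plus polling `docker container inspect --format={{.State.Status}}`, the two commands logged above. A stand-alone sketch of that loop (names and timeouts are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState shells out the same inspect the log shows.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        name := "default-k8s-diff-port-230628"
        exec.Command("docker", "start", name).Run() // restart the stopped container
        for i := 0; i < 10; i++ {
            if st, err := containerState(name); err == nil && st == "running" {
                fmt.Println("container is running")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("container did not reach running state")
    }
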
	I1002 22:16:53.511595 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:53.540322 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.540565 1464250 machine.go:93] provisionDockerMachine start ...
	I1002 22:16:53.540629 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:53.564571 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:53.564883 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:53.564892 1464250 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:16:53.570095 1464250 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:16:56.722513 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:16:56.722536 1464250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-230628"
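
The `handshake failed: EOF` at 22:16:53.570095 followed by a clean result at 22:16:56.722513 shows the provisioner retrying SSH until sshd inside the restarted container is ready. A TCP-level approximation of that wait (the real code retries the full SSH handshake, not just the dial; the forwarded port 34571 is the one from this run):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the forwarded SSH port until something accepts the
    // connection; early attempts fail while the container is still booting.
    func waitForSSH(addr string, deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            c, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                c.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh endpoint %s never came up", addr)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:34571", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }
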
	I1002 22:16:56.722616 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.745126 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.745433 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.745450 1464250 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-230628 && echo "default-k8s-diff-port-230628" | sudo tee /etc/hostname
	I1002 22:16:56.909371 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:16:56.909492 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.935785 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.936109 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.936127 1464250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-230628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-230628/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-230628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:16:57.078389 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:16:57.078416 1464250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:16:57.078452 1464250 ubuntu.go:190] setting up certificates
	I1002 22:16:57.078464 1464250 provision.go:84] configureAuth start
	I1002 22:16:57.078528 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:57.103962 1464250 provision.go:143] copyHostCerts
	I1002 22:16:57.104043 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:16:57.104067 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:16:57.104157 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:16:57.104282 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:16:57.104293 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:16:57.104323 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:16:57.104417 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:16:57.104435 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:16:57.104470 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:16:57.104542 1464250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-230628 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-230628 localhost minikube]
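
The server cert generated above is an ordinary CA-signed x509 certificate whose SANs cover every name the machine may be reached by (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube). A self-contained standard-library sketch of the same operation, with a throwaway CA standing in for ~/.minikube/certs/ca.pem; this is the shape of the operation, not minikube's provision.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (the real run loads the existing minikubeCA pair).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs the log lists for this profile.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-230628"}},
            DNSNames:     []string{"default-k8s-diff-port-230628", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
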
	I1002 22:16:57.173268 1464250 provision.go:177] copyRemoteCerts
	I1002 22:16:57.173348 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:16:57.173401 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.197458 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.310188 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:16:57.337815 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:16:57.363629 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:16:57.393898 1464250 provision.go:87] duration metric: took 315.415822ms to configureAuth
	I1002 22:16:57.393966 1464250 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:16:57.394255 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:57.394409 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.419379 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:57.419762 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:57.419788 1464250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:16:57.747609 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:16:57.747631 1464250 machine.go:96] duration metric: took 4.207056415s to provisionDockerMachine
	I1002 22:16:57.747678 1464250 start.go:293] postStartSetup for "default-k8s-diff-port-230628" (driver="docker")
	I1002 22:16:57.747689 1464250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:16:57.747749 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:16:57.747799 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.767729 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.866276 1464250 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:16:57.869924 1464250 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:16:57.869952 1464250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:16:57.869963 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:16:57.870021 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:16:57.870125 1464250 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:16:57.870234 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:16:57.877904 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:16:57.896072 1464250 start.go:296] duration metric: took 148.377821ms for postStartSetup
	I1002 22:16:57.896224 1464250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:16:57.896319 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.913370 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.011325 1464250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:16:58.017811 1464250 fix.go:56] duration metric: took 4.894105584s for fixHost
	I1002 22:16:58.017838 1464250 start.go:83] releasing machines lock for "default-k8s-diff-port-230628", held for 4.894158063s
	I1002 22:16:58.017916 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:58.043658 1464250 ssh_runner.go:195] Run: cat /version.json
	I1002 22:16:58.043745 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.044077 1464250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:16:58.044149 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.085645 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.093817 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.328980 1464250 ssh_runner.go:195] Run: systemctl --version
	I1002 22:16:58.338589 1464250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:16:58.403158 1464250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:16:58.410450 1464250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:16:58.410529 1464250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:16:58.420452 1464250 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:16:58.420472 1464250 start.go:495] detecting cgroup driver to use...
	I1002 22:16:58.420503 1464250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:16:58.420550 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:16:58.441159 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:16:58.469385 1464250 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:16:58.469532 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:16:58.489039 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:16:58.514417 1464250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:16:58.729270 1464250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:16:58.899442 1464250 docker.go:234] disabling docker service ...
	I1002 22:16:58.899528 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:16:58.927300 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:16:58.956494 1464250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:16:59.153935 1464250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:16:59.380952 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:16:59.404925 1464250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:16:59.428535 1464250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:16:59.428602 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.443195 1464250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:16:59.443275 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.453072 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.470762 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.480680 1464250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:16:59.495366 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.511397 1464250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.525950 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.542756 1464250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:16:59.555536 1464250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:16:59.567342 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:16:59.777346 1464250 ssh_runner.go:195] Run: sudo systemctl restart crio
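
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to cgroupfs, reset conmon_cgroup, and open unprivileged ports via default_sysctls, then daemon-reload and restart crio. A rough Go equivalent of the first two edits, illustrative only; the shell commands in the log are the authoritative form:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        b, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        conf := string(b)
        // Same effect as: sed -i 's|^.*pause_image = .*$|...|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        // Same effect as: sed -i 's|^.*cgroup_manager = .*$|...|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        // systemctl daemon-reload && systemctl restart crio follows, as logged.
    }
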
	I1002 22:16:59.996294 1464250 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:16:59.996443 1464250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:17:00.000992 1464250 start.go:563] Will wait 60s for crictl version
	I1002 22:17:00.001101 1464250 ssh_runner.go:195] Run: which crictl
	I1002 22:17:00.006874 1464250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:17:00.049873 1464250 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:17:00.049989 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.113645 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.192200 1464250 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:16:57.582618 1461647 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001518043s
	I1002 22:16:57.586746 1461647 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:16:57.586851 1461647 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:16:57.586982 1461647 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:16:57.587073 1461647 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:17:00.195337 1464250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-230628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:17:00.224465 1464250 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:17:00.234568 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:17:00.266674 1464250 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:17:00.266810 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:17:00.266880 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.354979 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.355085 1464250 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:17:00.355202 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.405746 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.405772 1464250 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:17:00.405795 1464250 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 22:17:00.405906 1464250 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-230628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:17:00.405997 1464250 ssh_runner.go:195] Run: crio config
	I1002 22:17:00.513755 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:17:00.513782 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:00.513809 1464250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:17:00.513840 1464250 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-230628 NodeName:default-k8s-diff-port-230628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:17:00.513999 1464250 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-230628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:17:00.514105 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:17:00.526167 1464250 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:17:00.526268 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:17:00.538316 1464250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 22:17:00.561096 1464250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:17:00.587710 1464250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
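
The 2225-byte file written to /var/tmp/minikube/kubeadm.yaml.new is the four-document stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits the stream and lists each document's kind as a sanity check, assuming gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        b, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // kubeadm config files separate documents with a bare "---" line.
        for _, doc := range strings.Split(string(b), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err == nil && meta.Kind != "" {
                fmt.Println(meta.APIVersion, meta.Kind)
            }
        }
    }
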
	I1002 22:17:00.616497 1464250 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:17:00.626507 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
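
Both /etc/hosts rewrites in this run (host.minikube.internal at 22:17:00.234568 and control-plane.minikube.internal here) use the same idempotent pattern: filter out any existing entry for the name, then append the pinned one. The same logic in Go (pinHost is an illustrative name; the logged command stages through /tmp/h.$$ and sudo cp, elided here):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any existing tab-separated entry for name and appends
    // the pinned ip<TAB>name line, mirroring the bash one-liner above.
    func pinHost(path, ip, name string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal")
        fmt.Println(err)
    }
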
	I1002 22:17:00.645309 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:00.849459 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:00.879244 1464250 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628 for IP: 192.168.76.2
	I1002 22:17:00.879319 1464250 certs.go:195] generating shared ca certs ...
	I1002 22:17:00.879364 1464250 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:00.879591 1464250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:17:00.879694 1464250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:17:00.879734 1464250 certs.go:257] generating profile certs ...
	I1002 22:17:00.879888 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key
	I1002 22:17:00.880000 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595
	I1002 22:17:00.880084 1464250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key
	I1002 22:17:00.880249 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:17:00.880327 1464250 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:17:00.880373 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:17:00.880479 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:17:00.880557 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:17:00.880606 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:17:00.880718 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:17:00.881770 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:17:00.946651 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:17:01.000213 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:17:01.078566 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:17:01.166792 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 22:17:01.226124 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:17:01.295374 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:17:01.350643 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:17:01.399862 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:17:01.436700 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:17:01.493113 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:17:01.539726 1464250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:17:01.564705 1464250 ssh_runner.go:195] Run: openssl version
	I1002 22:17:01.575769 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:17:01.588296 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.592905 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.593075 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.662785 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:17:01.675327 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:17:01.699302 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703743 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703887 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.778073 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:17:01.802386 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:17:01.832716 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837808 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837956 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.957141 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
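
The three `ln -fs` commands above give each CA PEM an OpenSSL subject-hash alias (b5213941.0, 51391683.0, 3ec20f2e.0), which is the filename scheme the system trust store resolves. A sketch of producing one such link, assuming the openssl binary is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes `openssl x509 -hash -noout -in <pem>` and
    // symlinks /etc/ssl/certs/<hash>.0 to the PEM, as the log does.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
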
	I1002 22:17:01.975541 1464250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:17:01.983041 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:17:02.120630 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:17:02.356670 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:17:02.597674 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:17:02.719438 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:17:02.835972 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
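
`-checkend 86400` asks whether a certificate will expire within the next 86400 seconds (24 h); a failing check would trigger regeneration before the control plane restarts. The equivalent test in Go's crypto/x509, applied to one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert's NotAfter falls inside the
    // next d, i.e. what `openssl x509 -checkend` exits non-zero for.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(b)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
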
	I1002 22:17:02.935511 1464250 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:17:02.935685 1464250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:17:02.935805 1464250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:17:03.063735 1464250 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:17:03.063817 1464250 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:17:03.063838 1464250 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:17:03.063863 1464250 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:17:03.063903 1464250 cri.go:89] found id: ""
	I1002 22:17:03.064013 1464250 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:17:03.102669 1464250 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:17:03Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:17:03.102852 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:17:03.138162 1464250 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:17:03.138183 1464250 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:17:03.138242 1464250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:17:03.154641 1464250 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:17:03.155232 1464250 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-230628" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.155450 1464250 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-230628" cluster setting kubeconfig missing "default-k8s-diff-port-230628" context setting]
	I1002 22:17:03.155902 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
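
The repair logged above adds the missing cluster and context stanzas for this profile and rewrites the kubeconfig under a file lock. A sketch of the same fix-up with client-go's clientcmd (the locking from lock.go is omitted):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds cluster/context entries for the profile if they
    // are missing, then writes the file back.
    func repairKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := clientcmdapi.NewCluster()
            c.Server = server
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := clientcmdapi.NewContext()
            ctx.Cluster = name
            ctx.AuthInfo = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := repairKubeconfig("/home/jenkins/minikube-integration/21682-1270657/kubeconfig",
            "default-k8s-diff-port-230628", "https://192.168.76.2:8444")
        fmt.Println(err)
    }
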
	I1002 22:17:03.157939 1464250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:17:03.191443 1464250 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:17:03.191529 1464250 kubeadm.go:601] duration metric: took 53.339576ms to restartPrimaryControlPlane
	I1002 22:17:03.191564 1464250 kubeadm.go:402] duration metric: took 256.062788ms to StartCluster
	I1002 22:17:03.191608 1464250 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.191760 1464250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.192606 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.193181 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:03.193294 1464250 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:03.193376 1464250 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.193392 1464250 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.193399 1464250 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:17:03.193421 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.193952 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.193267 1464250 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:03.194791 1464250 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.194824 1464250 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.194832 1464250 addons.go:247] addon dashboard should already be in state true
	I1002 22:17:03.194858 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.195313 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.195535 1464250 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.195558 1464250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-230628"
	I1002 22:17:03.195827 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
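
The three addons being wired up above can also be enabled from the CLI; a sketch using the same profile flag that appears elsewhere in this log:

	minikube -p default-k8s-diff-port-230628 addons enable storage-provisioner
	minikube -p default-k8s-diff-port-230628 addons enable dashboard
	minikube -p default-k8s-diff-port-230628 addons enable default-storageclass
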
	I1002 22:17:03.211810 1464250 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:03.215190 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:03.256729 1464250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:03.262848 1464250 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.262873 1464250 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:17:03.262900 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.263503 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.264020 1464250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:17:03.266167 1464250 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.266192 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:03.266266 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.269780 1464250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:17:04.442415 1461647 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.851469395s
	I1002 22:17:03.272664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:17:03.272693 1464250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:17:03.272773 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.315483 1464250 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:03.315505 1464250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:03.315578 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.321039 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.346244 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.364512 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.807998 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.826796 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:03.854664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:17:03.854685 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:17:03.944157 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:04.013117 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:17:04.013140 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:17:04.123747 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:17:04.123823 1464250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:17:04.288279 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:17:04.288354 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:17:04.364870 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:17:04.364959 1464250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:17:04.407691 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:17:04.407764 1464250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:17:04.480556 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:17:04.480585 1464250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:17:04.521232 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:17:04.521258 1464250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:17:04.559450 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:17:04.559476 1464250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:17:04.605034 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
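
Once the manifests above are applied, dashboard readiness can be confirmed with a standard rollout check; the deployment name is an assumption based on the kubernetes-dashboard pods listed at the end of this log:

	kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m
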
	I1002 22:17:07.541309 1461647 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.953707302s
	I1002 22:17:09.588432 1461647 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.001620033s
	I1002 22:17:09.621336 1461647 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:17:09.639794 1461647 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:17:09.670650 1461647 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:17:09.671131 1461647 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-080134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:17:09.705280 1461647 kubeadm.go:318] [bootstrap-token] Using token: n45vfv.yum1oz8wyqc2j4g1
	I1002 22:17:09.708338 1461647 out.go:252]   - Configuring RBAC rules ...
	I1002 22:17:09.712243 1461647 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:17:09.740982 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:17:09.759389 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:17:09.766849 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:17:09.771823 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:17:09.779533 1461647 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:17:09.996111 1461647 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:17:10.584421 1461647 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:17:11.003554 1461647 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:17:11.005488 1461647 kubeadm.go:318] 
	I1002 22:17:11.005580 1461647 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:17:11.005594 1461647 kubeadm.go:318] 
	I1002 22:17:11.005676 1461647 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:17:11.005686 1461647 kubeadm.go:318] 
	I1002 22:17:11.005712 1461647 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:17:11.008239 1461647 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:17:11.008309 1461647 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:17:11.008324 1461647 kubeadm.go:318] 
	I1002 22:17:11.008379 1461647 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:17:11.008389 1461647 kubeadm.go:318] 
	I1002 22:17:11.008436 1461647 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:17:11.008445 1461647 kubeadm.go:318] 
	I1002 22:17:11.008497 1461647 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:17:11.008576 1461647 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:17:11.008648 1461647 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:17:11.008658 1461647 kubeadm.go:318] 
	I1002 22:17:11.008769 1461647 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:17:11.008849 1461647 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:17:11.008860 1461647 kubeadm.go:318] 
	I1002 22:17:11.008943 1461647 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009049 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:17:11.009073 1461647 kubeadm.go:318] 	--control-plane 
	I1002 22:17:11.009083 1461647 kubeadm.go:318] 
	I1002 22:17:11.009186 1461647 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:17:11.009198 1461647 kubeadm.go:318] 
	I1002 22:17:11.009279 1461647 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009383 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:17:11.023465 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:17:11.023726 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:17:11.023853 1461647 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
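
The bootstrap token in the join commands above expires after 24 hours by default; a fresh join command can be printed on the control-plane node (a sketch, assuming kubeadm is on the node's PATH):

	sudo kubeadm token create --print-join-command
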
	I1002 22:17:11.023875 1461647 cni.go:84] Creating CNI manager for ""
	I1002 22:17:11.023882 1461647 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:11.027244 1461647 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 22:17:11.030352 1461647 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:17:11.035499 1461647 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:17:11.035518 1461647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:17:11.081114 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
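
After the kindnet manifest is applied, the generated CNI config can be inspected on the node; the conflist path is taken from the CRI-O messages later in this log and is assumed to be the same for this profile:

	minikube -p embed-certs-080134 ssh -- ls /etc/cni/net.d/
	minikube -p embed-certs-080134 ssh -- cat /etc/cni/net.d/10-kindnet.conflist
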
	I1002 22:17:11.871876 1461647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:17:11.872009 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:11.872082 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-080134 minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=embed-certs-080134 minikube.k8s.io/primary=true
	I1002 22:17:12.165350 1461647 ops.go:34] apiserver oom_adj: -16
	I1002 22:17:12.165457 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:12.606880 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.798845764s)
	I1002 22:17:12.606926 1464250 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.780095412s)
	I1002 22:17:12.606945 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.662767941s)
	I1002 22:17:12.606965 1464250 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633571 1464250 node_ready.go:49] node "default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:12.633598 1464250 node_ready.go:38] duration metric: took 26.619873ms for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633611 1464250 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:12.633667 1464250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:12.636084 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.030999638s)
	I1002 22:17:12.639185 1464250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-230628 addons enable metrics-server
	
	I1002 22:17:12.642199 1464250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 22:17:12.645102 1464250 addons.go:514] duration metric: took 9.451804219s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 22:17:12.654910 1464250 api_server.go:72] duration metric: took 9.460557178s to wait for apiserver process to appear ...
	I1002 22:17:12.654986 1464250 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:12.655023 1464250 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 22:17:12.664190 1464250 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 22:17:12.666133 1464250 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:12.666181 1464250 api_server.go:131] duration metric: took 11.174799ms to wait for apiserver health ...
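
The healthz probe above is a plain HTTPS GET against the apiserver; it can be reproduced with curl, where -k skips verification of the cluster-internal CA (on default RBAC, /healthz is readable without credentials):

	curl -k https://192.168.76.2:8444/healthz
	# expected response body: ok
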
	I1002 22:17:12.666191 1464250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:12.670701 1464250 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:12.670783 1464250 system_pods.go:61] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.670807 1464250 system_pods.go:61] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.670826 1464250 system_pods.go:61] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.670849 1464250 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.670869 1464250 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.670889 1464250 system_pods.go:61] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.670922 1464250 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.670956 1464250 system_pods.go:61] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.670978 1464250 system_pods.go:74] duration metric: took 4.779311ms to wait for pod list to return data ...
	I1002 22:17:12.670999 1464250 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:12.674413 1464250 default_sa.go:45] found service account: "default"
	I1002 22:17:12.674491 1464250 default_sa.go:55] duration metric: took 3.468336ms for default service account to be created ...
	I1002 22:17:12.674515 1464250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:12.680553 1464250 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:12.680630 1464250 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.680659 1464250 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.680678 1464250 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.680700 1464250 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.680721 1464250 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.680750 1464250 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.680772 1464250 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.680790 1464250 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.680814 1464250 system_pods.go:126] duration metric: took 6.281332ms to wait for k8s-apps to be running ...
	I1002 22:17:12.680841 1464250 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:12.680914 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:12.697527 1464250 system_svc.go:56] duration metric: took 16.67669ms WaitForService to wait for kubelet
	I1002 22:17:12.697600 1464250 kubeadm.go:586] duration metric: took 9.503251307s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:12.697634 1464250 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:12.702159 1464250 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:12.702246 1464250 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:12.702279 1464250 node_conditions.go:105] duration metric: took 4.623843ms to run NodePressure ...
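
The NodePressure check reads the node's capacity fields; the figures logged above (203034800Ki ephemeral storage, 2 CPUs) can be pulled directly:

	kubectl get node default-k8s-diff-port-230628 -o jsonpath='{.status.capacity}'
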
	I1002 22:17:12.702306 1464250 start.go:241] waiting for startup goroutines ...
	I1002 22:17:12.702326 1464250 start.go:246] waiting for cluster config update ...
	I1002 22:17:12.702350 1464250 start.go:255] writing updated cluster config ...
	I1002 22:17:12.702676 1464250 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:12.706651 1464250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:12.712818 1464250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:12.666440 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.166439 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.665999 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.165574 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.665914 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.165512 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.665798 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.166361 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.458779 1461647 kubeadm.go:1113] duration metric: took 4.586825209s to wait for elevateKubeSystemPrivileges
	I1002 22:17:16.458818 1461647 kubeadm.go:402] duration metric: took 28.362818929s to StartCluster
	I1002 22:17:16.458834 1461647 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.458901 1461647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:16.460309 1461647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.460534 1461647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:16.460673 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:17:16.460941 1461647 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:16.460978 1461647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:16.461037 1461647 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:17:16.461052 1461647 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	I1002 22:17:16.461073 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.461859 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.462004 1461647 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:17:16.462017 1461647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:17:16.462286 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.470097 1461647 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:16.475911 1461647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:16.506516 1461647 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	I1002 22:17:16.506559 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.506990 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.513185 1461647 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:16.516051 1461647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:16.516076 1461647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:16.516144 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.543068 1461647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:16.543090 1461647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:16.543152 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.567762 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:16.579891 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:17.128545 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:17.152138 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:17:17.152325 1461647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 22:17:14.759931 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:17.220444 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:17.432367 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:18.492317 1461647 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.339938615s)
	I1002 22:17:18.492378 1461647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.340158418s)
	I1002 22:17:18.492406 1461647 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
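
The host-record injection logged above is a fetch-edit-replace on the coredns ConfigMap; a simplified sketch of the same pattern (the exact sed expression is in the command a few lines up):

	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed '/forward . \/etc\/resolv.conf/i\        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -
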
	I1002 22:17:18.493493 1461647 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:18.492334 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.363709152s)
	I1002 22:17:18.493868 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.061434042s)
	I1002 22:17:18.546845 1461647 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 22:17:18.549820 1461647 addons.go:514] duration metric: took 2.088826282s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:17:18.997495 1461647 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-080134" context rescaled to 1 replicas
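
The rescale logged here is equivalent to the following, assuming the embed-certs-080134 context is the one just written to the kubeconfig:

	kubectl --context embed-certs-080134 -n kube-system scale deployment coredns --replicas=1
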
	W1002 22:17:20.497277 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:19.718973 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:21.728877 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:22.497342 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.998466 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.219680 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:26.720462 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:27.497483 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:29.996995 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:28.723923 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:31.220083 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:32.496612 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:34.496804 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:36.996831 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:33.719403 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:36.218685 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:38.996960 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:40.997613 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:38.718898 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:40.719450 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:42.721875 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:43.002209 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.500512 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.226594 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:47.718748 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:49.718479 1464250 pod_ready.go:94] pod "coredns-66bc5c9577-jvqks" is "Ready"
	I1002 22:17:49.718503 1464250 pod_ready.go:86] duration metric: took 37.005609154s for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.721438 1464250 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.726333 1464250 pod_ready.go:94] pod "etcd-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.726362 1464250 pod_ready.go:86] duration metric: took 4.900277ms for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.730900 1464250 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.735612 1464250 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.735638 1464250 pod_ready.go:86] duration metric: took 4.702176ms for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.737984 1464250 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.916418 1464250 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.916457 1464250 pod_ready.go:86] duration metric: took 178.447296ms for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.116788 1464250 pod_ready.go:83] waiting for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.516203 1464250 pod_ready.go:94] pod "kube-proxy-4l9vx" is "Ready"
	I1002 22:17:50.516231 1464250 pod_ready.go:86] duration metric: took 399.418014ms for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.716110 1464250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116917 1464250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:51.116944 1464250 pod_ready.go:86] duration metric: took 400.805238ms for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116958 1464250 pod_ready.go:40] duration metric: took 38.410228374s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:51.181171 1464250 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:17:51.186197 1464250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-230628" cluster and "default" namespace by default
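
The per-pod readiness loop above (one wait per control-plane label) can be approximated with kubectl's built-in condition wait; a sketch for the kube-dns label, reusing the 4m budget from the log:

	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
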
	W1002 22:17:47.997091 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:50.497007 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:52.497079 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:54.997340 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:57.496730 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	I1002 22:17:58.496864 1461647 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:17:58.496899 1461647 node_ready.go:38] duration metric: took 40.003354073s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:58.496917 1461647 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:58.496981 1461647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:58.517204 1461647 api_server.go:72] duration metric: took 42.056630926s to wait for apiserver process to appear ...
	I1002 22:17:58.517227 1461647 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:58.517246 1461647 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:17:58.526908 1461647 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:17:58.528262 1461647 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:58.528349 1461647 api_server.go:131] duration metric: took 11.114049ms to wait for apiserver health ...
	I1002 22:17:58.528374 1461647 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:58.532518 1461647 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:58.532575 1461647 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.532582 1461647 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.532589 1461647 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.532593 1461647 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.532602 1461647 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.532606 1461647 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.532616 1461647 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.532622 1461647 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.532630 1461647 system_pods.go:74] duration metric: took 4.23825ms to wait for pod list to return data ...
	I1002 22:17:58.532649 1461647 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:58.537529 1461647 default_sa.go:45] found service account: "default"
	I1002 22:17:58.537556 1461647 default_sa.go:55] duration metric: took 4.901081ms for default service account to be created ...
	I1002 22:17:58.537566 1461647 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:58.637591 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.637621 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.637628 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.637634 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.637638 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.637644 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.637648 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.637651 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.637657 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.637676 1461647 retry.go:31] will retry after 276.140742ms: missing components: kube-dns
	I1002 22:17:58.918387 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.918423 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.918431 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.918439 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.918444 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.918449 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.918453 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.918458 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.918465 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.918482 1461647 retry.go:31] will retry after 317.04108ms: missing components: kube-dns
	I1002 22:17:59.238846 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.238886 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:59.238893 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.238900 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.238904 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.238909 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.238913 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.238917 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.238923 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:59.238942 1461647 retry.go:31] will retry after 307.274217ms: missing components: kube-dns
	I1002 22:17:59.549838 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.549873 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running
	I1002 22:17:59.549880 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.549887 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.549892 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.549897 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.549900 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.549904 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.549908 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:17:59.549916 1461647 system_pods.go:126] duration metric: took 1.012345208s to wait for k8s-apps to be running ...
	I1002 22:17:59.549928 1461647 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:59.549993 1461647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:59.563622 1461647 system_svc.go:56] duration metric: took 13.684817ms WaitForService to wait for kubelet
	I1002 22:17:59.563650 1461647 kubeadm.go:586] duration metric: took 43.103081582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:59.563668 1461647 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:59.566851 1461647 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:59.566886 1461647 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:59.566901 1461647 node_conditions.go:105] duration metric: took 3.227388ms to run NodePressure ...
	I1002 22:17:59.566913 1461647 start.go:241] waiting for startup goroutines ...
	I1002 22:17:59.566920 1461647 start.go:246] waiting for cluster config update ...
	I1002 22:17:59.566931 1461647 start.go:255] writing updated cluster config ...
	I1002 22:17:59.567215 1461647 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:59.571164 1461647 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:59.574876 1461647 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.580778 1461647 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:17:59.580806 1461647 pod_ready.go:86] duration metric: took 5.902795ms for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.583585 1461647 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.588251 1461647 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:17:59.588278 1461647 pod_ready.go:86] duration metric: took 4.667182ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.590708 1461647 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.595294 1461647 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:17:59.595357 1461647 pod_ready.go:86] duration metric: took 4.622867ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.597593 1461647 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.975568 1461647 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:17:59.975595 1461647 pod_ready.go:86] duration metric: took 377.978017ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.177466 1461647 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.575634 1461647 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:18:00.575741 1461647 pod_ready.go:86] duration metric: took 398.246371ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.776410 1461647 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176131 1461647 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:18:01.176165 1461647 pod_ready.go:86] duration metric: took 399.727181ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176180 1461647 pod_ready.go:40] duration metric: took 1.604978311s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:01.231027 1461647 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:18:01.236820 1461647 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.627316935Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631767554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631799193Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631822405Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.634978574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.635012747Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.635034819Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.638111621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.638141069Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.63816282Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.641187094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.641218798Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.362826815Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d50aa584-26db-43d7-a68f-8901d43e1760 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.364759449Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd5d5f8-090a-4b9b-90b5-cf9a31278b9a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.367317565Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=ab67223c-4455-438a-812d-aa7c42603464 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.367620222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.380370344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.381924401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.41398863Z" level=info msg="Created container 7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=ab67223c-4455-438a-812d-aa7c42603464 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.415734067Z" level=info msg="Starting container: 7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137" id=b180cabd-2dd8-4069-b475-95e86d9454a6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.421276893Z" level=info msg="Started container" PID=1712 containerID=7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper id=b180cabd-2dd8-4069-b475-95e86d9454a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ef238c47f38763a37f651b8f2950f6b7cedc20deecd96f96e0dd43b30d77c08
	Oct 02 22:18:00 default-k8s-diff-port-230628 conmon[1709]: conmon 7966756c2dbde4caab40 <ninfo>: container 1712 exited with status 1
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.757724462Z" level=info msg="Removing container: 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.764682508Z" level=info msg="Error loading conmon cgroup of container 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00: cgroup deleted" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.768032265Z" level=info msg="Removed container 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7966756c2dbde       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   7ef238c47f387       dashboard-metrics-scraper-6ffb444bf9-x8x97             kubernetes-dashboard
	a7993422fac72       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   01558f0fa4734       storage-provisioner                                    kube-system
	cc4ee39a94830       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   60a4d9611d04a       kubernetes-dashboard-855c9754f9-p8jr6                  kubernetes-dashboard
	da714324f023d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   394c4a97ce874       busybox                                                default
	fee643046efa5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   fc82e120ce2b0       coredns-66bc5c9577-jvqks                               kube-system
	809868061d07f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   2df55607a758c       kindnet-lvsjr                                          kube-system
	cc20e87d8de3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   01558f0fa4734       storage-provisioner                                    kube-system
	c64d369cac403       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   ec255bf28850e       kube-proxy-4l9vx                                       kube-system
	01235239b3d7b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   af739bcd83de7       kube-apiserver-default-k8s-diff-port-230628            kube-system
	f86d294e2252f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a8532827a39c1       kube-scheduler-default-k8s-diff-port-230628            kube-system
	d628dbd9a32a7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1f003e04b4e94       etcd-default-k8s-diff-port-230628                      kube-system
	0ad0bf20345ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a15196cfdd9b2       kube-controller-manager-default-k8s-diff-port-230628   kube-system
	
	
	==> coredns [fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57874 - 2420 "HINFO IN 233656523258483718.1371668010258019426. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.043060957s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-230628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-230628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=default-k8s-diff-port-230628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:15:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-230628
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:18:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-230628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb3f0d65bf5f4e71ad6317cfad520713
	  System UUID:                6317d78e-133b-4770-8f0b-21b4d8e9ee44
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-jvqks                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-default-k8s-diff-port-230628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-lvsjr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-230628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-230628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-4l9vx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-230628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x8x97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p8jr6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s                  node-controller  Node default-k8s-diff-port-230628 event: Registered Node default-k8s-diff-port-230628 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-230628 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-230628 event: Registered Node default-k8s-diff-port-230628 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf] <==
	{"level":"warn","ts":"2025-10-02T22:17:07.421208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.431521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.474409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.522828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.566962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.623150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.647455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.680848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.722440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.769234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.798269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.828376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.843088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.870278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.917584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.943994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.959791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.013189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.036936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.063849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.089450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.133697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.165319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.207696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.369585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:18:06 up  7:00,  0 user,  load average: 2.70, 2.55, 2.17
	Linux default-k8s-diff-port-230628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963] <==
	I1002 22:17:11.403426       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:17:11.403682       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:17:11.403823       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:17:11.403834       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:17:11.403847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:17:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:17:11.630322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:17:11.634597       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:17:11.634716       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:17:11.635468       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:17:41.625412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:17:41.631024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:17:41.635583       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:17:41.636647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:17:42.935572       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:17:42.935618       1 metrics.go:72] Registering metrics
	I1002 22:17:42.935676       1 controller.go:711] "Syncing nftables rules"
	I1002 22:17:51.626148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:17:51.626222       1 main.go:301] handling current node
	I1002 22:18:01.634137       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:18:01.634171       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533] <==
	I1002 22:17:09.966491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:17:09.973878       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:17:09.977670       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:17:10.026741       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:17:10.026984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:17:10.057553       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:17:10.057617       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:17:10.076635       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:17:10.076657       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:17:10.076798       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:17:10.077896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:17:10.077933       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:17:10.112875       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 22:17:10.233979       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:17:10.321567       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:17:10.356353       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:17:12.073957       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:17:12.313118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:17:12.386862       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:17:12.421116       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:17:12.550109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.129.85"}
	I1002 22:17:12.621124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.135"}
	I1002 22:17:14.196675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:17:14.249042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:17:14.595959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc] <==
	I1002 22:17:14.141590       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:17:14.141124       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:17:14.140102       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 22:17:14.141157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:17:14.146678       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:17:14.146894       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:17:14.151447       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:17:14.151735       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 22:17:14.154172       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:17:14.154325       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:17:14.155366       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:17:14.156677       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:17:14.159965       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:17:14.160086       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:17:14.160184       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-230628"
	I1002 22:17:14.160257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 22:17:14.163914       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:17:14.190884       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:17:14.191041       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 22:17:14.191084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 22:17:14.198651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:17:14.199803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:17:14.203965       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 22:17:14.206804       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:17:14.209141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68] <==
	I1002 22:17:11.809175       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:17:12.203575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:17:12.326654       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:17:12.326703       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:17:12.326771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:17:12.436387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:17:12.436525       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:17:12.476767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:17:12.477167       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:17:12.477436       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:17:12.485846       1 config.go:200] "Starting service config controller"
	I1002 22:17:12.485919       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:17:12.485937       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:17:12.485941       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:17:12.485952       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:17:12.485957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:17:12.489171       1 config.go:309] "Starting node config controller"
	I1002 22:17:12.489252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:17:12.489282       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:17:12.587346       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:17:12.591728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:17:12.591766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad] <==
	I1002 22:17:08.202319       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:17:10.797112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:17:10.797150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:17:10.935742       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:17:10.935880       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:17:10.935910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:17:10.935931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:17:10.967557       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:17:10.967580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:17:10.967600       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:10.967607       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:11.049345       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:17:11.080412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:11.080484       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:17:15 default-k8s-diff-port-230628 kubelet[777]: W1002 22:17:15.113189     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/crio-60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c WatchSource:0}: Error finding container 60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c: Status 404 returned error can't find the container with id 60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c
	Oct 02 22:17:19 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:19.363609     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:17:20 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:20.640054     777 scope.go:117] "RemoveContainer" containerID="e879b43716f0d3fbb6ba379088b7a054c3a0de637fd144003a8599673acde384"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:21.649808     777 scope.go:117] "RemoveContainer" containerID="e879b43716f0d3fbb6ba379088b7a054c3a0de637fd144003a8599673acde384"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:21.649920     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:21.653521     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:22 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:22.653743     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:22 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:22.653880     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:25 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:25.054419     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:25 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:25.054589     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:36 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:36.361007     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:36 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:36.694204     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:37.698482     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:37.698642     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:37.712124     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p8jr6" podStartSLOduration=13.340211897 podStartE2EDuration="23.712106258s" podCreationTimestamp="2025-10-02 22:17:14 +0000 UTC" firstStartedPulling="2025-10-02 22:17:15.116018265 +0000 UTC m=+14.232334882" lastFinishedPulling="2025-10-02 22:17:25.487912626 +0000 UTC m=+24.604229243" observedRunningTime="2025-10-02 22:17:25.691999722 +0000 UTC m=+24.808316355" watchObservedRunningTime="2025-10-02 22:17:37.712106258 +0000 UTC m=+36.828422875"
	Oct 02 22:17:41 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:41.709270     777 scope.go:117] "RemoveContainer" containerID="cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	Oct 02 22:17:45 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:45.054128     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:17:45 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:45.054381     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:18:00 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:00.361654     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:18:00 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:00.756429     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:18:01 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:01.761120     777 scope.go:117] "RemoveContainer" containerID="7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	Oct 02 22:18:01 default-k8s-diff-port-230628 kubelet[777]: E1002 22:18:01.761288     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c] <==
	2025/10/02 22:17:25 Using namespace: kubernetes-dashboard
	2025/10/02 22:17:25 Using in-cluster config to connect to apiserver
	2025/10/02 22:17:25 Using secret token for csrf signing
	2025/10/02 22:17:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:17:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:17:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:17:25 Generating JWE encryption key
	2025/10/02 22:17:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:17:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:17:26 Initializing JWE encryption key from synchronized object
	2025/10/02 22:17:26 Creating in-cluster Sidecar client
	2025/10/02 22:17:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:17:26 Serving insecurely on HTTP port: 9090
	2025/10/02 22:17:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:17:25 Starting overwatch
	
	
	==> storage-provisioner [a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24] <==
	I1002 22:17:41.760460       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:17:41.775905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:17:41.776031       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:17:41.778489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:45.234660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:49.497288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:53.098216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:56.152106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.174187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.179385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:59.179548       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:17:59.179755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7!
	I1002 22:17:59.180236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ee210b-5507-4164-84d5-3e6947882443", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7 became leader
	W1002 22:17:59.188049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.191305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:59.280818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7!
	W1002 22:18:01.196274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:01.206239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:03.209794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:03.215679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:05.222346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:05.229912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51] <==
	I1002 22:17:11.660577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:17:41.662338       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628: exit status 2 (378.854674ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-230628
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-230628:

-- stdout --
	[
	    {
	        "Id": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	        "Created": "2025-10-02T22:15:06.94474657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1464385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:16:53.185365109Z",
	            "FinishedAt": "2025-10-02T22:16:52.118863211Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/hosts",
	        "LogPath": "/var/lib/docker/containers/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef-json.log",
	        "Name": "/default-k8s-diff-port-230628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-230628:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-230628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef",
	                "LowerDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5c628ca79750e3d995a1b81f43e4cf8497558233c931f34a8c6108eea62d466/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-230628",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-230628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-230628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-230628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89debf29f1e53388f53ce6ee9dba0cfaad5ed694d7fddddbcaf246a5ce7571ac",
	            "SandboxKey": "/var/run/docker/netns/89debf29f1e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34571"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34572"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34575"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34573"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34574"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-230628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:a0:59:4c:5b:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b0ce512013df0626d99cabbb56683ffeecfa8da9b150b56cbd6d68363d36b91b",
	                    "EndpointID": "f7cd068edcb68bacff0d9567761c7f7d81467d3fafeb93b5cfebebea9a1d55a3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-230628",
	                        "75dade69ea95"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
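The inspect dump above is where the host-side port mappings for this profile come from: NetworkSettings.Ports binds each container port to a HostPort on 127.0.0.1 (22/tcp → 34571 for SSH, 8444/tcp → 34574 for the apiserver). A minimal sketch of extracting one mapping with a Go template, the same invocation the cli_runner lines further down in this log use:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-230628
	# prints 34571, matching the 22/tcp entry in the JSON above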
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628: exit status 2 (381.58695ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
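For context on "(may be ok)": minikube status encodes which components are unhealthy in its exit code rather than only in the text output, so a nonzero exit with Host=Running is expected for a cluster that has just been paused. A hedged sketch of branching on it:

	if out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628; then
	  echo "fully up"
	else
	  echo "degraded, exit code $?"   # 2 in this run: host Running, cluster paused
	fi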
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-230628 logs -n 25: (1.341400717s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:10 UTC │ 02 Oct 25 22:11 UTC │
	│ delete  │ -p force-systemd-env-915858                                                                                                                                                                                                                   │ force-systemd-env-915858     │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:16:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:16:52.815690 1464250 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:16:52.815816 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.815822 1464250 out.go:374] Setting ErrFile to fd 2...
	I1002 22:16:52.815827 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.816195 1464250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:16:52.816652 1464250 out.go:368] Setting JSON to false
	I1002 22:16:52.817603 1464250 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25138,"bootTime":1759418275,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:16:52.817703 1464250 start.go:140] virtualization:  
	I1002 22:16:52.820985 1464250 out.go:179] * [default-k8s-diff-port-230628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:16:52.825015 1464250 notify.go:220] Checking for updates...
	I1002 22:16:52.825905 1464250 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:16:52.829013 1464250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:16:52.832208 1464250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:16:52.835173 1464250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:16:52.838076 1464250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:16:52.841779 1464250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:16:52.845265 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:52.845844 1464250 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:16:52.885245 1464250 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:16:52.885452 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:52.979074 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:52.969867837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:52.979176 1464250 docker.go:318] overlay module found
	I1002 22:16:52.982431 1464250 out.go:179] * Using the docker driver based on existing profile
	I1002 22:16:52.985315 1464250 start.go:304] selected driver: docker
	I1002 22:16:52.985335 1464250 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:52.985449 1464250 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:16:52.986258 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:53.084126 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:53.06918457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:53.084467 1464250 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:16:53.084504 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:16:53.084559 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:16:53.084591 1464250 start.go:348] cluster config:
	{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:53.087838 1464250 out.go:179] * Starting "default-k8s-diff-port-230628" primary control-plane node in "default-k8s-diff-port-230628" cluster
	I1002 22:16:53.090692 1464250 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:16:53.093491 1464250 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:16:53.096271 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:16:53.096326 1464250 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:16:53.096335 1464250 cache.go:58] Caching tarball of preloaded images
	I1002 22:16:53.096433 1464250 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:16:53.096443 1464250 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:16:53.096554 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.096778 1464250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:16:53.123538 1464250 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:16:53.123557 1464250 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:16:53.123580 1464250 cache.go:232] Successfully downloaded all kic artifacts
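	# A rough shell equivalent of the two cache-check lines above (minikube goes
	# through the Docker API rather than the CLI): an image inspect on the pinned
	# digest, where exit 0 means the kicbase image is present and the pull is skipped.
	docker image inspect "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d" \
	  >/dev/null 2>&1 && echo "in local daemon, skipping pull"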
	I1002 22:16:53.123605 1464250 start.go:360] acquireMachinesLock for default-k8s-diff-port-230628: {Name:mk03e8992f46bd2d7f7874118d4f399e26ab9e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:16:53.123670 1464250 start.go:364] duration metric: took 47.04µs to acquireMachinesLock for "default-k8s-diff-port-230628"
	I1002 22:16:53.123691 1464250 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:16:53.123704 1464250 fix.go:54] fixHost starting: 
	I1002 22:16:53.123992 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.144033 1464250 fix.go:112] recreateIfNeeded on default-k8s-diff-port-230628: state=Stopped err=<nil>
	W1002 22:16:53.144061 1464250 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:16:53.494318 1461647 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:16:53.495105 1461647 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:16:54.209761 1461647 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:16:54.898068 1461647 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:16:55.163488 1461647 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:16:56.207823 1461647 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:16:56.417222 1461647 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:16:56.418360 1461647 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:16:56.426417 1461647 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:16:56.430091 1461647 out.go:252]   - Booting up control plane ...
	I1002 22:16:56.430209 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:16:56.430302 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:16:56.430374 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:16:56.449134 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:16:56.449255 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:16:56.458304 1461647 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:16:56.458417 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:16:56.458470 1461647 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:16:56.582476 1461647 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:16:56.582603 1461647 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
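	# The kubelet-check above polls a plain HTTP endpoint on the node; an
	# equivalent manual probe from inside that container (assuming curl is available):
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"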
	I1002 22:16:53.147360 1464250 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-230628" ...
	I1002 22:16:53.147462 1464250 cli_runner.go:164] Run: docker start default-k8s-diff-port-230628
	I1002 22:16:53.486318 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.511179 1464250 kic.go:430] container "default-k8s-diff-port-230628" state is running.
	I1002 22:16:53.511595 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:53.540322 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.540565 1464250 machine.go:93] provisionDockerMachine start ...
	I1002 22:16:53.540629 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:53.564571 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:53.564883 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:53.564892 1464250 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:16:53.570095 1464250 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:16:56.722513 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
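	# The earlier "Error dialing TCP: ssh: handshake failed: EOF" is the normal
	# first attempt against a freshly restarted container whose sshd is not up yet;
	# libmachine retries until the dial succeeds (about three seconds later here).
	# A hand-rolled sketch of the same wait, using the port and key this log reports:
	until ssh -i /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa \
	    -o StrictHostKeyChecking=no -o ConnectTimeout=2 -p 34571 docker@127.0.0.1 true 2>/dev/null; do
	  sleep 1
	done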
	
	I1002 22:16:56.722536 1464250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-230628"
	I1002 22:16:56.722616 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.745126 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.745433 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.745450 1464250 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-230628 && echo "default-k8s-diff-port-230628" | sudo tee /etc/hostname
	I1002 22:16:56.909371 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:16:56.909492 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.935785 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.936109 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.936127 1464250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-230628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-230628/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-230628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:16:57.078389 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:16:57.078416 1464250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:16:57.078452 1464250 ubuntu.go:190] setting up certificates
	I1002 22:16:57.078464 1464250 provision.go:84] configureAuth start
	I1002 22:16:57.078528 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:57.103962 1464250 provision.go:143] copyHostCerts
	I1002 22:16:57.104043 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:16:57.104067 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:16:57.104157 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:16:57.104282 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:16:57.104293 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:16:57.104323 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:16:57.104417 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:16:57.104435 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:16:57.104470 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:16:57.104542 1464250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-230628 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-230628 localhost minikube]
	I1002 22:16:57.173268 1464250 provision.go:177] copyRemoteCerts
	I1002 22:16:57.173348 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:16:57.173401 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.197458 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.310188 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:16:57.337815 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:16:57.363629 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:16:57.393898 1464250 provision.go:87] duration metric: took 315.415822ms to configureAuth
	I1002 22:16:57.393966 1464250 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:16:57.394255 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:57.394409 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.419379 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:57.419762 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:57.419788 1464250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:16:57.747609 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:16:57.747631 1464250 machine.go:96] duration metric: took 4.207056415s to provisionDockerMachine
	I1002 22:16:57.747678 1464250 start.go:293] postStartSetup for "default-k8s-diff-port-230628" (driver="docker")
	I1002 22:16:57.747689 1464250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:16:57.747749 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:16:57.747799 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.767729 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.866276 1464250 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:16:57.869924 1464250 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:16:57.869952 1464250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:16:57.869963 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:16:57.870021 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:16:57.870125 1464250 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:16:57.870234 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:16:57.877904 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:16:57.896072 1464250 start.go:296] duration metric: took 148.377821ms for postStartSetup
	I1002 22:16:57.896224 1464250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:16:57.896319 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.913370 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.011325 1464250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:16:58.017811 1464250 fix.go:56] duration metric: took 4.894105584s for fixHost
	I1002 22:16:58.017838 1464250 start.go:83] releasing machines lock for "default-k8s-diff-port-230628", held for 4.894158063s
	I1002 22:16:58.017916 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:58.043658 1464250 ssh_runner.go:195] Run: cat /version.json
	I1002 22:16:58.043745 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.044077 1464250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:16:58.044149 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.085645 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.093817 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.328980 1464250 ssh_runner.go:195] Run: systemctl --version
	I1002 22:16:58.338589 1464250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:16:58.403158 1464250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:16:58.410450 1464250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:16:58.410529 1464250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:16:58.420452 1464250 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:16:58.420472 1464250 start.go:495] detecting cgroup driver to use...
	I1002 22:16:58.420503 1464250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:16:58.420550 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:16:58.441159 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:16:58.469385 1464250 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:16:58.469532 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:16:58.489039 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:16:58.514417 1464250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:16:58.729270 1464250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:16:58.899442 1464250 docker.go:234] disabling docker service ...
	I1002 22:16:58.899528 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:16:58.927300 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:16:58.956494 1464250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:16:59.153935 1464250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:16:59.380952 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:16:59.404925 1464250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:16:59.428535 1464250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:16:59.428602 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.443195 1464250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:16:59.443275 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.453072 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.470762 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.480680 1464250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:16:59.495366 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.511397 1464250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.525950 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
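	# Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf
	# on a small drop-in. A hedged reconstruction, not what minikube literally runs
	# (it edits in place; the section headers here are assumed, the key/value pairs
	# are exactly what the commands write):
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	# followed by the daemon-reload and crio restart visible a few lines below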
	I1002 22:16:59.542756 1464250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:16:59.555536 1464250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:16:59.567342 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:16:59.777346 1464250 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:16:59.996294 1464250 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:16:59.996443 1464250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:17:00.000992 1464250 start.go:563] Will wait 60s for crictl version
	I1002 22:17:00.001101 1464250 ssh_runner.go:195] Run: which crictl
	I1002 22:17:00.006874 1464250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:17:00.049873 1464250 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:17:00.049989 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.113645 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.192200 1464250 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:16:57.582618 1461647 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001518043s
	I1002 22:16:57.586746 1461647 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:16:57.586851 1461647 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:16:57.586982 1461647 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:16:57.587073 1461647 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
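	# The three control-plane checks above are plain HTTPS probes (these 1461647
	# lines belong to a second profile starting in parallel, with its apiserver on
	# 192.168.85.2:8443 per the log). Manual equivalents, run inside that node with
	# certificate verification skipped:
	curl -sk https://192.168.85.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler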
	I1002 22:17:00.195337 1464250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-230628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:17:00.224465 1464250 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:17:00.234568 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:17:00.266674 1464250 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:17:00.266810 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:17:00.266880 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.354979 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.355085 1464250 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:17:00.355202 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.405746 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.405772 1464250 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:17:00.405795 1464250 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 22:17:00.405906 1464250 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-230628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
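	(The kubelet unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A quick sketch to confirm the merged unit on the node, assuming the docker-driver container is named after the profile, as is usual for minikube:)

		# show the drop-in and the effective kubelet unit inside the node container
		docker exec default-k8s-diff-port-230628 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		docker exec default-k8s-diff-port-230628 systemctl cat kubelet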
	I1002 22:17:00.405997 1464250 ssh_runner.go:195] Run: crio config
	I1002 22:17:00.513755 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:17:00.513782 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:00.513809 1464250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:17:00.513840 1464250 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-230628 NodeName:default-k8s-diff-port-230628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:17:00.513999 1464250 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-230628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
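	(The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp below before being diffed against the live copy. A hedged sketch for sanity-checking the staged manifest with the bundled kubeadm binary; `kubeadm config validate` is assumed to be available in the v1.34 binaries minikube installs under /var/lib/minikube/binaries:)

		# validate the staged kubeadm config from inside the node (illustrative, not run by the test)
		minikube -p default-k8s-diff-port-230628 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new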
	I1002 22:17:00.514105 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:17:00.526167 1464250 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:17:00.526268 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:17:00.538316 1464250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 22:17:00.561096 1464250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:17:00.587710 1464250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1002 22:17:00.616497 1464250 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:17:00.626507 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
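	(The one-liner above is minikube's idempotent /etc/hosts update: drop any stale line for the name, append the fresh mapping, then copy the temp file back into place; cp rather than mv rewrites the existing inode, so processes holding /etc/hosts open keep a valid handle. The same pattern, generalized into a helper — the function name is illustrative:)

		# upsert IP -> NAME in /etc/hosts without duplicating the entry
		upsert_host() {
		  local ip="$1" name="$2"
		  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
		  sudo cp "/tmp/hosts.$$" /etc/hosts
		}
		upsert_host 192.168.76.2 control-plane.minikube.internal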
	I1002 22:17:00.645309 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:00.849459 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:00.879244 1464250 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628 for IP: 192.168.76.2
	I1002 22:17:00.879319 1464250 certs.go:195] generating shared ca certs ...
	I1002 22:17:00.879364 1464250 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:00.879591 1464250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:17:00.879694 1464250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:17:00.879734 1464250 certs.go:257] generating profile certs ...
	I1002 22:17:00.879888 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key
	I1002 22:17:00.880000 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595
	I1002 22:17:00.880084 1464250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key
	I1002 22:17:00.880249 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:17:00.880327 1464250 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:17:00.880373 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:17:00.880479 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:17:00.880557 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:17:00.880606 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:17:00.880718 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:17:00.881770 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:17:00.946651 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:17:01.000213 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:17:01.078566 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:17:01.166792 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 22:17:01.226124 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:17:01.295374 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:17:01.350643 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:17:01.399862 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:17:01.436700 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:17:01.493113 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:17:01.539726 1464250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:17:01.564705 1464250 ssh_runner.go:195] Run: openssl version
	I1002 22:17:01.575769 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:17:01.588296 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.592905 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.593075 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.662785 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:17:01.675327 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:17:01.699302 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703743 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703887 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.778073 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:17:01.802386 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:17:01.832716 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837808 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837956 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.957141 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
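	(The ls/openssl/ln sequence above follows OpenSSL's c_rehash convention: `openssl x509 -hash` prints the subject-name hash of the certificate — b5213941 for minikubeCA.pem, per the symlink created above — and a link named <hash>.0 in /etc/ssl/certs is what makes the CA discoverable to TLS clients. A sketch of the same steps by hand:)

		# compute the subject hash and create the lookup symlink (suffix .0 = first cert with this hash)
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"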
	I1002 22:17:01.975541 1464250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:17:01.983041 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:17:02.120630 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:17:02.356670 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:17:02.597674 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:17:02.719438 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:17:02.835972 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
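	(Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds — 24 hours — from now; exit status 0 means yes, 1 means it would expire inside that window, which is how the restart path decides whether certs need regenerating. For example:)

		# the exit status tells you whether the cert survives the next 24h
		if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 >/dev/null; then
		  echo "certificate valid for at least another day"
		else
		  echo "certificate expires within 24h; regenerate"
		fi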
	I1002 22:17:02.935511 1464250 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:17:02.935685 1464250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:17:02.935805 1464250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:17:03.063735 1464250 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:17:03.063817 1464250 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:17:03.063838 1464250 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:17:03.063863 1464250 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:17:03.063903 1464250 cri.go:89] found id: ""
	I1002 22:17:03.064013 1464250 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:17:03.102669 1464250 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:17:03Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:17:03.102852 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:17:03.138162 1464250 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:17:03.138183 1464250 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:17:03.138242 1464250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:17:03.154641 1464250 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:17:03.155232 1464250 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-230628" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.155450 1464250 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-230628" cluster setting kubeconfig missing "default-k8s-diff-port-230628" context setting]
	I1002 22:17:03.155902 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.157939 1464250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:17:03.191443 1464250 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:17:03.191529 1464250 kubeadm.go:601] duration metric: took 53.339576ms to restartPrimaryControlPlane
	I1002 22:17:03.191564 1464250 kubeadm.go:402] duration metric: took 256.062788ms to StartCluster
	I1002 22:17:03.191608 1464250 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.191760 1464250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.192606 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.193181 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:03.193294 1464250 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:03.193376 1464250 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.193392 1464250 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.193399 1464250 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:17:03.193421 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.193952 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.193267 1464250 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:03.194791 1464250 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.194824 1464250 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.194832 1464250 addons.go:247] addon dashboard should already be in state true
	I1002 22:17:03.194858 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.195313 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.195535 1464250 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.195558 1464250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-230628"
	I1002 22:17:03.195827 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.211810 1464250 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:03.215190 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:03.256729 1464250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:03.262848 1464250 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.262873 1464250 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:17:03.262900 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.263503 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.264020 1464250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:17:03.266167 1464250 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.266192 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:03.266266 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.269780 1464250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:17:04.442415 1461647 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.851469395s
	I1002 22:17:03.272664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:17:03.272693 1464250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:17:03.272773 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.315483 1464250 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:03.315505 1464250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:03.315578 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.321039 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.346244 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.364512 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.807998 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.826796 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:03.854664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:17:03.854685 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:17:03.944157 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:04.013117 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:17:04.013140 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:17:04.123747 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:17:04.123823 1464250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:17:04.288279 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:17:04.288354 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:17:04.364870 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:17:04.364959 1464250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:17:04.407691 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:17:04.407764 1464250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:17:04.480556 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:17:04.480585 1464250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:17:04.521232 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:17:04.521258 1464250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:17:04.559450 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:17:04.559476 1464250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:17:04.605034 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:17:07.541309 1461647 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.953707302s
	I1002 22:17:09.588432 1461647 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.001620033s
	I1002 22:17:09.621336 1461647 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:17:09.639794 1461647 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:17:09.670650 1461647 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:17:09.671131 1461647 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-080134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:17:09.705280 1461647 kubeadm.go:318] [bootstrap-token] Using token: n45vfv.yum1oz8wyqc2j4g1
	I1002 22:17:09.708338 1461647 out.go:252]   - Configuring RBAC rules ...
	I1002 22:17:09.712243 1461647 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:17:09.740982 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:17:09.759389 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:17:09.766849 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:17:09.771823 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:17:09.779533 1461647 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:17:09.996111 1461647 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:17:10.584421 1461647 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:17:11.003554 1461647 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:17:11.005488 1461647 kubeadm.go:318] 
	I1002 22:17:11.005580 1461647 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:17:11.005594 1461647 kubeadm.go:318] 
	I1002 22:17:11.005676 1461647 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:17:11.005686 1461647 kubeadm.go:318] 
	I1002 22:17:11.005712 1461647 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:17:11.008239 1461647 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:17:11.008309 1461647 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:17:11.008324 1461647 kubeadm.go:318] 
	I1002 22:17:11.008379 1461647 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:17:11.008389 1461647 kubeadm.go:318] 
	I1002 22:17:11.008436 1461647 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:17:11.008445 1461647 kubeadm.go:318] 
	I1002 22:17:11.008497 1461647 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:17:11.008576 1461647 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:17:11.008648 1461647 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:17:11.008658 1461647 kubeadm.go:318] 
	I1002 22:17:11.008769 1461647 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:17:11.008849 1461647 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:17:11.008860 1461647 kubeadm.go:318] 
	I1002 22:17:11.008943 1461647 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009049 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:17:11.009073 1461647 kubeadm.go:318] 	--control-plane 
	I1002 22:17:11.009083 1461647 kubeadm.go:318] 
	I1002 22:17:11.009186 1461647 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:17:11.009198 1461647 kubeadm.go:318] 
	I1002 22:17:11.009279 1461647 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009383 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:17:11.023465 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:17:11.023726 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:17:11.023853 1461647 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:17:11.023875 1461647 cni.go:84] Creating CNI manager for ""
	I1002 22:17:11.023882 1461647 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:11.027244 1461647 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 22:17:11.030352 1461647 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:17:11.035499 1461647 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:17:11.035518 1461647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:17:11.081114 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:17:11.871876 1461647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:17:11.872009 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:11.872082 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-080134 minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=embed-certs-080134 minikube.k8s.io/primary=true
	I1002 22:17:12.165350 1461647 ops.go:34] apiserver oom_adj: -16
	I1002 22:17:12.165457 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:12.606880 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.798845764s)
	I1002 22:17:12.606926 1464250 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.780095412s)
	I1002 22:17:12.606945 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.662767941s)
	I1002 22:17:12.606965 1464250 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633571 1464250 node_ready.go:49] node "default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:12.633598 1464250 node_ready.go:38] duration metric: took 26.619873ms for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633611 1464250 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:12.633667 1464250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:12.636084 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.030999638s)
	I1002 22:17:12.639185 1464250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-230628 addons enable metrics-server
	
	I1002 22:17:12.642199 1464250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 22:17:12.645102 1464250 addons.go:514] duration metric: took 9.451804219s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 22:17:12.654910 1464250 api_server.go:72] duration metric: took 9.460557178s to wait for apiserver process to appear ...
	I1002 22:17:12.654986 1464250 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:12.655023 1464250 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 22:17:12.664190 1464250 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
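	(The healthz probe above can be reproduced from the host; -k is needed because the apiserver certificate is signed by minikube's own CA rather than a system-trusted one, and 8444 is this profile's non-default apiserver port:)

		# expect HTTP 200 with the literal body "ok"
		curl -sk https://192.168.76.2:8444/healthz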
	I1002 22:17:12.666133 1464250 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:12.666181 1464250 api_server.go:131] duration metric: took 11.174799ms to wait for apiserver health ...
	I1002 22:17:12.666191 1464250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:12.670701 1464250 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:12.670783 1464250 system_pods.go:61] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.670807 1464250 system_pods.go:61] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.670826 1464250 system_pods.go:61] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.670849 1464250 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.670869 1464250 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.670889 1464250 system_pods.go:61] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.670922 1464250 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.670956 1464250 system_pods.go:61] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.670978 1464250 system_pods.go:74] duration metric: took 4.779311ms to wait for pod list to return data ...
	I1002 22:17:12.670999 1464250 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:12.674413 1464250 default_sa.go:45] found service account: "default"
	I1002 22:17:12.674491 1464250 default_sa.go:55] duration metric: took 3.468336ms for default service account to be created ...
	I1002 22:17:12.674515 1464250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:12.680553 1464250 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:12.680630 1464250 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.680659 1464250 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.680678 1464250 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.680700 1464250 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.680721 1464250 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.680750 1464250 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.680772 1464250 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.680790 1464250 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.680814 1464250 system_pods.go:126] duration metric: took 6.281332ms to wait for k8s-apps to be running ...
	I1002 22:17:12.680841 1464250 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:12.680914 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:12.697527 1464250 system_svc.go:56] duration metric: took 16.67669ms WaitForService to wait for kubelet
	I1002 22:17:12.697600 1464250 kubeadm.go:586] duration metric: took 9.503251307s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:12.697634 1464250 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:12.702159 1464250 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:12.702246 1464250 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:12.702279 1464250 node_conditions.go:105] duration metric: took 4.623843ms to run NodePressure ...
	I1002 22:17:12.702306 1464250 start.go:241] waiting for startup goroutines ...
	I1002 22:17:12.702326 1464250 start.go:246] waiting for cluster config update ...
	I1002 22:17:12.702350 1464250 start.go:255] writing updated cluster config ...
	I1002 22:17:12.702676 1464250 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:12.706651 1464250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:12.712818 1464250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:12.666440 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.166439 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.665999 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.165574 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.665914 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.165512 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.665798 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.166361 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.458779 1461647 kubeadm.go:1113] duration metric: took 4.586825209s to wait for elevateKubeSystemPrivileges
	I1002 22:17:16.458818 1461647 kubeadm.go:402] duration metric: took 28.362818929s to StartCluster
	I1002 22:17:16.458834 1461647 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.458901 1461647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:16.460309 1461647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.460534 1461647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:16.460673 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:17:16.460941 1461647 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:16.460978 1461647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:16.461037 1461647 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:17:16.461052 1461647 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	I1002 22:17:16.461073 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.461859 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.462004 1461647 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:17:16.462017 1461647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:17:16.462286 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.470097 1461647 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:16.475911 1461647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:16.506516 1461647 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	I1002 22:17:16.506559 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.506990 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.513185 1461647 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:16.516051 1461647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:16.516076 1461647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:16.516144 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.543068 1461647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:16.543090 1461647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:16.543152 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.567762 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:16.579891 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:17.128545 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:17.152138 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:17:17.152325 1461647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 22:17:14.759931 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:17.220444 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:17.432367 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:18.492317 1461647 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.339938615s)
	I1002 22:17:18.492378 1461647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.340158418s)
	I1002 22:17:18.492406 1461647 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
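	(The pipeline completed above edits CoreDNS's Corefile in place: the sed splices a hosts plugin block ahead of the forward directive so host.minikube.internal resolves to 192.168.85.1 inside the cluster, then the ConfigMap is replaced. A sketch for checking the result, assuming the kubeconfig context is named after the profile as minikube normally sets it:)

		# view the patched Corefile; expect a hosts { 192.168.85.1 host.minikube.internal ... } block before forward
		kubectl --context embed-certs-080134 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'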
	I1002 22:17:18.493493 1461647 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:18.492334 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.363709152s)
	I1002 22:17:18.493868 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.061434042s)
	I1002 22:17:18.546845 1461647 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 22:17:18.549820 1461647 addons.go:514] duration metric: took 2.088826282s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:17:18.997495 1461647 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-080134" context rescaled to 1 replicas
	W1002 22:17:20.497277 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:19.718973 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:21.728877 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:22.497342 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.998466 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.219680 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:26.720462 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:27.497483 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:29.996995 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:28.723923 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:31.220083 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:32.496612 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:34.496804 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:36.996831 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:33.719403 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:36.218685 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:38.996960 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:40.997613 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:38.718898 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:40.719450 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:42.721875 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:43.002209 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.500512 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.226594 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:47.718748 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:49.718479 1464250 pod_ready.go:94] pod "coredns-66bc5c9577-jvqks" is "Ready"
	I1002 22:17:49.718503 1464250 pod_ready.go:86] duration metric: took 37.005609154s for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.721438 1464250 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.726333 1464250 pod_ready.go:94] pod "etcd-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.726362 1464250 pod_ready.go:86] duration metric: took 4.900277ms for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.730900 1464250 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.735612 1464250 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.735638 1464250 pod_ready.go:86] duration metric: took 4.702176ms for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.737984 1464250 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.916418 1464250 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.916457 1464250 pod_ready.go:86] duration metric: took 178.447296ms for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.116788 1464250 pod_ready.go:83] waiting for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.516203 1464250 pod_ready.go:94] pod "kube-proxy-4l9vx" is "Ready"
	I1002 22:17:50.516231 1464250 pod_ready.go:86] duration metric: took 399.418014ms for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.716110 1464250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116917 1464250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:51.116944 1464250 pod_ready.go:86] duration metric: took 400.805238ms for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116958 1464250 pod_ready.go:40] duration metric: took 38.410228374s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:51.181171 1464250 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:17:51.186197 1464250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-230628" cluster and "default" namespace by default
	W1002 22:17:47.997091 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:50.497007 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:52.497079 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:54.997340 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:57.496730 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	I1002 22:17:58.496864 1461647 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:17:58.496899 1461647 node_ready.go:38] duration metric: took 40.003354073s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:58.496917 1461647 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:58.496981 1461647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:58.517204 1461647 api_server.go:72] duration metric: took 42.056630926s to wait for apiserver process to appear ...
	I1002 22:17:58.517227 1461647 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:58.517246 1461647 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:17:58.526908 1461647 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:17:58.528262 1461647 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:58.528349 1461647 api_server.go:131] duration metric: took 11.114049ms to wait for apiserver health ...
	I1002 22:17:58.528374 1461647 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:58.532518 1461647 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:58.532575 1461647 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.532582 1461647 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.532589 1461647 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.532593 1461647 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.532602 1461647 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.532606 1461647 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.532616 1461647 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.532622 1461647 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.532630 1461647 system_pods.go:74] duration metric: took 4.23825ms to wait for pod list to return data ...
	I1002 22:17:58.532649 1461647 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:58.537529 1461647 default_sa.go:45] found service account: "default"
	I1002 22:17:58.537556 1461647 default_sa.go:55] duration metric: took 4.901081ms for default service account to be created ...
	I1002 22:17:58.537566 1461647 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:58.637591 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.637621 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.637628 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.637634 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.637638 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.637644 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.637648 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.637651 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.637657 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.637676 1461647 retry.go:31] will retry after 276.140742ms: missing components: kube-dns
	I1002 22:17:58.918387 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.918423 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.918431 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.918439 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.918444 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.918449 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.918453 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.918458 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.918465 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.918482 1461647 retry.go:31] will retry after 317.04108ms: missing components: kube-dns
	I1002 22:17:59.238846 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.238886 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:59.238893 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.238900 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.238904 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.238909 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.238913 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.238917 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.238923 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:59.238942 1461647 retry.go:31] will retry after 307.274217ms: missing components: kube-dns
	I1002 22:17:59.549838 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.549873 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running
	I1002 22:17:59.549880 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.549887 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.549892 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.549897 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.549900 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.549904 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.549908 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:17:59.549916 1461647 system_pods.go:126] duration metric: took 1.012345208s to wait for k8s-apps to be running ...
	I1002 22:17:59.549928 1461647 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:59.549993 1461647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:59.563622 1461647 system_svc.go:56] duration metric: took 13.684817ms WaitForService to wait for kubelet
	I1002 22:17:59.563650 1461647 kubeadm.go:586] duration metric: took 43.103081582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:59.563668 1461647 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:59.566851 1461647 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:59.566886 1461647 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:59.566901 1461647 node_conditions.go:105] duration metric: took 3.227388ms to run NodePressure ...
	I1002 22:17:59.566913 1461647 start.go:241] waiting for startup goroutines ...
	I1002 22:17:59.566920 1461647 start.go:246] waiting for cluster config update ...
	I1002 22:17:59.566931 1461647 start.go:255] writing updated cluster config ...
	I1002 22:17:59.567215 1461647 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:59.571164 1461647 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:59.574876 1461647 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.580778 1461647 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:17:59.580806 1461647 pod_ready.go:86] duration metric: took 5.902795ms for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.583585 1461647 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.588251 1461647 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:17:59.588278 1461647 pod_ready.go:86] duration metric: took 4.667182ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.590708 1461647 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.595294 1461647 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:17:59.595357 1461647 pod_ready.go:86] duration metric: took 4.622867ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.597593 1461647 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.975568 1461647 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:17:59.975595 1461647 pod_ready.go:86] duration metric: took 377.978017ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.177466 1461647 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.575634 1461647 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:18:00.575741 1461647 pod_ready.go:86] duration metric: took 398.246371ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.776410 1461647 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176131 1461647 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:18:01.176165 1461647 pod_ready.go:86] duration metric: took 399.727181ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176180 1461647 pod_ready.go:40] duration metric: took 1.604978311s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:01.231027 1461647 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:18:01.236820 1461647 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
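
The retry cadence visible in the two interleaved startup logs above (node_ready, pod_ready, system_pods, each logging "will retry" every couple of seconds until a timeout) is a plain poll loop: fetch the object, inspect its Ready condition, sleep, repeat. Below is a minimal sketch of that pattern with client-go, assuming a kubeconfig at the default path and reusing the pod name from this run; it illustrates the loop shape, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 2s for up to 4m0s, matching the "extra waiting up to 4m0s"
        // budget in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-jvqks", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep retrying
                }
                if !podIsReady(pod) {
                    fmt.Printf("pod %q is not \"Ready\" (will retry)\n", pod.Name)
                    return false, nil
                }
                return true, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is \"Ready\"")
    }
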
	
	
	==> CRI-O <==
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.627316935Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631767554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631799193Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.631822405Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.634978574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.635012747Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.635034819Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.638111621Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.638141069Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.63816282Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.641187094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:17:51 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:17:51.641218798Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.362826815Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d50aa584-26db-43d7-a68f-8901d43e1760 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.364759449Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd5d5f8-090a-4b9b-90b5-cf9a31278b9a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.367317565Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=ab67223c-4455-438a-812d-aa7c42603464 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.367620222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.380370344Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.381924401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.41398863Z" level=info msg="Created container 7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=ab67223c-4455-438a-812d-aa7c42603464 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.415734067Z" level=info msg="Starting container: 7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137" id=b180cabd-2dd8-4069-b475-95e86d9454a6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.421276893Z" level=info msg="Started container" PID=1712 containerID=7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper id=b180cabd-2dd8-4069-b475-95e86d9454a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ef238c47f38763a37f651b8f2950f6b7cedc20deecd96f96e0dd43b30d77c08
	Oct 02 22:18:00 default-k8s-diff-port-230628 conmon[1709]: conmon 7966756c2dbde4caab40 <ninfo>: container 1712 exited with status 1
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.757724462Z" level=info msg="Removing container: 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.764682508Z" level=info msg="Error loading conmon cgroup of container 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00: cgroup deleted" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:18:00 default-k8s-diff-port-230628 crio[648]: time="2025-10-02T22:18:00.768032265Z" level=info msg="Removed container 72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97/dashboard-metrics-scraper" id=85df246f-acad-4467-bceb-d9026339b314 name=/runtime.v1.RuntimeService/RemoveContainer
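
The CREATE → WRITE → RENAME sequence in the CRI-O log above is its watch on /etc/cni/net.d reacting to kindnet writing its CNI config atomically: write 10-kindnet.conflist.temp, then rename it into place. After each event CRI-O rescans the directory and re-resolves the default network, hence the repeated "Updated default CNI network name to kindnet". A rough sketch of that watch pattern with github.com/fsnotify/fsnotify (CRI-O's real monitoring lives in its ocicni code; this only shows the mechanism):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // Watch the CNI config directory, as CRI-O does.
        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // CREATE/WRITE/RENAME events like the ones in the log above.
                if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
                    log.Printf("CNI monitoring event %s", ev)
                    // Here CRI-O would rescan the directory and pick the
                    // first valid .conflist as the default network.
                }
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
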
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	7966756c2dbde       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   7ef238c47f387       dashboard-metrics-scraper-6ffb444bf9-x8x97             kubernetes-dashboard
	a7993422fac72       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   01558f0fa4734       storage-provisioner                                    kube-system
	cc4ee39a94830       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   60a4d9611d04a       kubernetes-dashboard-855c9754f9-p8jr6                  kubernetes-dashboard
	da714324f023d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   394c4a97ce874       busybox                                                default
	fee643046efa5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   fc82e120ce2b0       coredns-66bc5c9577-jvqks                               kube-system
	809868061d07f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   2df55607a758c       kindnet-lvsjr                                          kube-system
	cc20e87d8de3a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   01558f0fa4734       storage-provisioner                                    kube-system
	c64d369cac403       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   ec255bf28850e       kube-proxy-4l9vx                                       kube-system
	01235239b3d7b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   af739bcd83de7       kube-apiserver-default-k8s-diff-port-230628            kube-system
	f86d294e2252f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a8532827a39c1       kube-scheduler-default-k8s-diff-port-230628            kube-system
	d628dbd9a32a7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1f003e04b4e94       etcd-default-k8s-diff-port-230628                      kube-system
	0ad0bf20345ab       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a15196cfdd9b2       kube-controller-manager-default-k8s-diff-port-230628   kube-system
	
	
	==> coredns [fee643046efa55d27a185fe7948708690f63a1b1ed6d6f57c5ba3929605c01f6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57874 - 2420 "HINFO IN 233656523258483718.1371668010258019426. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.043060957s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
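
In the coredns log, the kubernetes plugin first blocks waiting for the API, starts serving on .:53 with an unsynced cache, and the i/o timeouts against 10.96.0.1:443 mark the window before the service network was reachable from the pod; once the list calls succeed, the "Unhandled Error" lines stop. When DNS is up, an end-to-end check is to resolve the kubernetes service through the cluster resolver. A sketch with a pinned net.Resolver, assuming the conventional kube-dns ClusterIP 10.96.0.10 (an assumption; that IP does not appear in this log):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Resolver pinned to the in-cluster DNS service instead of /etc/resolv.conf.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        if err != nil {
            fmt.Println("lookup failed (CoreDNS not ready yet):", err)
            return
        }
        fmt.Println("kubernetes service resolves to:", addrs) // e.g. 10.96.0.1
    }
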
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-230628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-230628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=default-k8s-diff-port-230628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:15:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-230628
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:18:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:15:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:17:41 +0000   Thu, 02 Oct 2025 22:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-230628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb3f0d65bf5f4e71ad6317cfad520713
	  System UUID:                6317d78e-133b-4770-8f0b-21b4d8e9ee44
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-jvqks                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-default-k8s-diff-port-230628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-lvsjr                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-230628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-230628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-4l9vx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-230628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x8x97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-p8jr6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Warning  CgroupV1                 2m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m27s                  node-controller  Node default-k8s-diff-port-230628 event: Registered Node default-k8s-diff-port-230628 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-230628 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-230628 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-230628 event: Registered Node default-k8s-diff-port-230628 in Controller
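
The NodePressure verification in the startup log (node_conditions.go reporting "ephemeral capacity is 203034800Ki" and "cpu capacity is 2") reads the same Capacity and Conditions fields shown in this describe output. A compact client-go sketch of that read, assuming a kubeconfig at the default path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-230628", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu capacity: %s, ephemeral-storage: %s\n", cpu.String(), storage.String())
        for _, c := range node.Status.Conditions {
            // MemoryPressure/DiskPressure/PIDPressure should be False, Ready True.
            fmt.Printf("%-16s %s (%s)\n", c.Type, c.Status, c.Reason)
        }
    }
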
	
	
	==> dmesg <==
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf] <==
	{"level":"warn","ts":"2025-10-02T22:17:07.421208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.431521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.474409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.522828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.566962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.623150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.647455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.680848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.722440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.769234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.798269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.828376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.843088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.870278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.917584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.943994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:07.959791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.013189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.036936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.063849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.089450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.133697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.165319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.207696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:08.369585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:18:09 up  7:00,  0 user,  load average: 2.70, 2.55, 2.17
	Linux default-k8s-diff-port-230628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [809868061d07f29dabb55aa5c484c5f78ef5dec1d1e5ea99ae41f8744323b963] <==
	I1002 22:17:11.403426       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:17:11.403682       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:17:11.403823       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:17:11.403834       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:17:11.403847       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:17:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:17:11.630322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:17:11.634597       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:17:11.634716       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:17:11.635468       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:17:41.625412       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:17:41.631024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:17:41.635583       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:17:41.636647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:17:42.935572       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:17:42.935618       1 metrics.go:72] Registering metrics
	I1002 22:17:42.935676       1 controller.go:711] "Syncing nftables rules"
	I1002 22:17:51.626148       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:17:51.626222       1 main.go:301] handling current node
	I1002 22:18:01.634137       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:18:01.634171       1 main.go:301] handling current node
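
kindnet's paired "Handling node with IPs ... handling current node" lines recur on a roughly 10s cadence: a periodic reconcile that walks the node list and refreshes per-node routes, alongside the nftables rule syncing it reports. A stripped-down sketch of that loop shape, driven here by a plain ticker (an assumption for illustration; the real kindnetd is informer-backed):

    package main

    import (
        "fmt"
        "time"
    )

    // reconcile stands in for kindnet's per-node work: listing nodes and
    // installing routes for each pod CIDR, then syncing dataplane rules.
    func reconcile() {
        fmt.Println("Handling node with IPs: map[192.168.76.2:{}]")
    }

    func main() {
        t := time.NewTicker(10 * time.Second)
        defer t.Stop()
        for range t.C {
            reconcile()
        }
    }
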
	
	
	==> kube-apiserver [01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533] <==
	I1002 22:17:09.966491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:17:09.973878       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:17:09.977670       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:17:10.026741       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:17:10.026984       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:17:10.057553       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:17:10.057617       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:17:10.076635       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:17:10.076657       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:17:10.076798       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:17:10.077896       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:17:10.077933       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:17:10.112875       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1002 22:17:10.233979       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:17:10.321567       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:17:10.356353       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:17:12.073957       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:17:12.313118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:17:12.386862       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:17:12.421116       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:17:12.550109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.129.85"}
	I1002 22:17:12.621124       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.135"}
	I1002 22:17:14.196675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:17:14.249042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:17:14.595959       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
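
The minikube log above gated readiness on GET https://192.168.85.2:8443/healthz returning 200 with body "ok". A minimal manual version of that probe, assuming the same endpoint and skipping certificate verification for brevity (a production check should trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Quick manual probe only; verify the cluster CA in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.85.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }
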
	
	
	==> kube-controller-manager [0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc] <==
	I1002 22:17:14.141590       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:17:14.141124       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:17:14.140102       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 22:17:14.141157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:17:14.146678       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:17:14.146894       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:17:14.151447       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:17:14.151735       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 22:17:14.154172       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:17:14.154325       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:17:14.155366       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:17:14.156677       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:17:14.159965       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:17:14.160086       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:17:14.160184       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-230628"
	I1002 22:17:14.160257       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 22:17:14.163914       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:17:14.190884       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:17:14.191041       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1002 22:17:14.191084       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 22:17:14.198651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:17:14.199803       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:17:14.203965       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 22:17:14.206804       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:17:14.209141       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [c64d369cac4035241b589e76349999be9beaef376d813cab8178ed3ae1854b68] <==
	I1002 22:17:11.809175       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:17:12.203575       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:17:12.326654       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:17:12.326703       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:17:12.326771       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:17:12.436387       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:17:12.436525       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:17:12.476767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:17:12.477167       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:17:12.477436       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:17:12.485846       1 config.go:200] "Starting service config controller"
	I1002 22:17:12.485919       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:17:12.485937       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:17:12.485941       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:17:12.485952       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:17:12.485957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:17:12.489171       1 config.go:309] "Starting node config controller"
	I1002 22:17:12.489252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:17:12.489282       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:17:12.587346       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:17:12.591728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:17:12.591766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad] <==
	I1002 22:17:08.202319       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:17:10.797112       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:17:10.797150       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:17:10.935742       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:17:10.935880       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:17:10.935910       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:17:10.935931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:17:10.967557       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:17:10.967580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:17:10.967600       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:10.967607       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:11.049345       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:17:11.080412       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:17:11.080484       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:17:15 default-k8s-diff-port-230628 kubelet[777]: W1002 22:17:15.113189     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/75dade69ea955804ab512a02834e37523fe4f046c0ebe7e6f6dbdea89e50f2ef/crio-60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c WatchSource:0}: Error finding container 60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c: Status 404 returned error can't find the container with id 60a4d9611d04a3a7a8c87e44a7f86b2ffa95a88d1b9de8076429e7aafd77d50c
	Oct 02 22:17:19 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:19.363609     777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:17:20 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:20.640054     777 scope.go:117] "RemoveContainer" containerID="e879b43716f0d3fbb6ba379088b7a054c3a0de637fd144003a8599673acde384"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:21.649808     777 scope.go:117] "RemoveContainer" containerID="e879b43716f0d3fbb6ba379088b7a054c3a0de637fd144003a8599673acde384"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:21.649920     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:21 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:21.653521     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:22 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:22.653743     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:22 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:22.653880     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:25 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:25.054419     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:25 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:25.054589     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:36 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:36.361007     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:36 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:36.694204     777 scope.go:117] "RemoveContainer" containerID="2cbe553ef03dd4e294f8ef8fe5a20c0fcf2f3401de3821dae52995e1ebd25560"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:37.698482     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:37.698642     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:17:37 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:37.712124     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-p8jr6" podStartSLOduration=13.340211897 podStartE2EDuration="23.712106258s" podCreationTimestamp="2025-10-02 22:17:14 +0000 UTC" firstStartedPulling="2025-10-02 22:17:15.116018265 +0000 UTC m=+14.232334882" lastFinishedPulling="2025-10-02 22:17:25.487912626 +0000 UTC m=+24.604229243" observedRunningTime="2025-10-02 22:17:25.691999722 +0000 UTC m=+24.808316355" watchObservedRunningTime="2025-10-02 22:17:37.712106258 +0000 UTC m=+36.828422875"
	Oct 02 22:17:41 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:41.709270     777 scope.go:117] "RemoveContainer" containerID="cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51"
	Oct 02 22:17:45 default-k8s-diff-port-230628 kubelet[777]: I1002 22:17:45.054128     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:17:45 default-k8s-diff-port-230628 kubelet[777]: E1002 22:17:45.054381     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:18:00 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:00.361654     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:18:00 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:00.756429     777 scope.go:117] "RemoveContainer" containerID="72cf8d20fcf602bfa248934a6e1e6efa5d48078bbe710a9fc4ecb7e643cfbb00"
	Oct 02 22:18:01 default-k8s-diff-port-230628 kubelet[777]: I1002 22:18:01.761120     777 scope.go:117] "RemoveContainer" containerID="7966756c2dbde4caab40ec5b73716c9776c9e99a78970f5313adcbd69f750137"
	Oct 02 22:18:01 default-k8s-diff-port-230628 kubelet[777]: E1002 22:18:01.761288     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8x97_kubernetes-dashboard(de31ff54-7189-4981-91fa-5dbb4afadedb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8x97" podUID="de31ff54-7189-4981-91fa-5dbb4afadedb"
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:18:03 default-k8s-diff-port-230628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc4ee39a94830173206b7e8710c573909fde2513242b876906c16d6dff3d2c4c] <==
	2025/10/02 22:17:25 Starting overwatch
	2025/10/02 22:17:25 Using namespace: kubernetes-dashboard
	2025/10/02 22:17:25 Using in-cluster config to connect to apiserver
	2025/10/02 22:17:25 Using secret token for csrf signing
	2025/10/02 22:17:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:17:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:17:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:17:25 Generating JWE encryption key
	2025/10/02 22:17:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:17:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:17:26 Initializing JWE encryption key from synchronized object
	2025/10/02 22:17:26 Creating in-cluster Sidecar client
	2025/10/02 22:17:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:17:26 Serving insecurely on HTTP port: 9090
	2025/10/02 22:17:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a7993422fac72852a98c0079ccc3410fc8956c0994cbb0968feb6358a834dc24] <==
	I1002 22:17:41.775905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:17:41.776031       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:17:41.778489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:45.234660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:49.497288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:53.098216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:56.152106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.174187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.179385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:59.179548       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:17:59.179755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7!
	I1002 22:17:59.180236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ee210b-5507-4164-84d5-3e6947882443", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7 became leader
	W1002 22:17:59.188049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:59.191305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:59.280818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-230628_ad20342c-abe7-4f6b-bda8-c92f988bdfa7!
	W1002 22:18:01.196274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:01.206239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:03.209794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:03.215679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:05.222346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:05.229912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:07.233375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:07.240080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:09.244920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:09.251985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cc20e87d8de3a0fdc4fed95d57eccb1fc685cc5777882f10e8b13feea7ad8e51] <==
	I1002 22:17:11.660577       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:17:41.662338       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628: exit status 2 (414.010482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (307.659241ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-080134 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-080134 describe deploy/metrics-server -n kube-system: exit status 1 (79.550211ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-080134 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-080134
helpers_test.go:243: (dbg) docker inspect embed-certs-080134:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	        "Created": "2025-10-02T22:16:37.741033428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1462173,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:16:37.803428565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hosts",
	        "LogPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e-json.log",
	        "Name": "/embed-certs-080134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-080134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-080134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	                "LowerDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-080134",
	                "Source": "/var/lib/docker/volumes/embed-certs-080134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-080134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-080134",
	                "name.minikube.sigs.k8s.io": "embed-certs-080134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "010b7ef7c9a88215a950a5d6a36e0cbb8c2fd4884225845e23ea59e1309c12e2",
	            "SandboxKey": "/var/run/docker/netns/010b7ef7c9a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34570"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34568"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34569"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-080134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:00:d6:a8:2b:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a64f7585ec4aa24b8094a59cd780b3d89a1239c63c189f2097d1ca2a382a6ac",
	                    "EndpointID": "8d133b53255c80e7b2d21870f501333670459a7da758e40d89fb14d4a67494d0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-080134",
	                        "d75a770c7fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25: (1.757921179s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ cert-options-280401 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ ssh     │ -p cert-options-280401 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ delete  │ -p cert-options-280401                                                                                                                                                                                                                        │ cert-options-280401          │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:13 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:13 UTC │ 02 Oct 25 22:14 UTC │
	│ start   │ -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-173127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │                     │
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:16:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:16:52.815690 1464250 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:16:52.815816 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.815822 1464250 out.go:374] Setting ErrFile to fd 2...
	I1002 22:16:52.815827 1464250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:16:52.816195 1464250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:16:52.816652 1464250 out.go:368] Setting JSON to false
	I1002 22:16:52.817603 1464250 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25138,"bootTime":1759418275,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:16:52.817703 1464250 start.go:140] virtualization:  
	I1002 22:16:52.820985 1464250 out.go:179] * [default-k8s-diff-port-230628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:16:52.825015 1464250 notify.go:220] Checking for updates...
	I1002 22:16:52.825905 1464250 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:16:52.829013 1464250 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:16:52.832208 1464250 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:16:52.835173 1464250 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:16:52.838076 1464250 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:16:52.841779 1464250 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:16:52.845265 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:52.845844 1464250 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:16:52.885245 1464250 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:16:52.885452 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:52.979074 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:52.969867837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:52.979176 1464250 docker.go:318] overlay module found
	I1002 22:16:52.982431 1464250 out.go:179] * Using the docker driver based on existing profile
	I1002 22:16:52.985315 1464250 start.go:304] selected driver: docker
	I1002 22:16:52.985335 1464250 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:52.985449 1464250 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:16:52.986258 1464250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:16:53.084126 1464250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-02 22:16:53.06918457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:16:53.084467 1464250 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:16:53.084504 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:16:53.084559 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:16:53.084591 1464250 start.go:348] cluster config:
	{Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:16:53.087838 1464250 out.go:179] * Starting "default-k8s-diff-port-230628" primary control-plane node in "default-k8s-diff-port-230628" cluster
	I1002 22:16:53.090692 1464250 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:16:53.093491 1464250 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:16:53.096271 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:16:53.096326 1464250 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:16:53.096335 1464250 cache.go:58] Caching tarball of preloaded images
	I1002 22:16:53.096433 1464250 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:16:53.096443 1464250 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:16:53.096554 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.096778 1464250 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:16:53.123538 1464250 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:16:53.123557 1464250 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:16:53.123580 1464250 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:16:53.123605 1464250 start.go:360] acquireMachinesLock for default-k8s-diff-port-230628: {Name:mk03e8992f46bd2d7f7874118d4f399e26ab9e4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:16:53.123670 1464250 start.go:364] duration metric: took 47.04µs to acquireMachinesLock for "default-k8s-diff-port-230628"
	I1002 22:16:53.123691 1464250 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:16:53.123704 1464250 fix.go:54] fixHost starting: 
	I1002 22:16:53.123992 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.144033 1464250 fix.go:112] recreateIfNeeded on default-k8s-diff-port-230628: state=Stopped err=<nil>
	W1002 22:16:53.144061 1464250 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:16:53.494318 1461647 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:16:53.495105 1461647 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:16:54.209761 1461647 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:16:54.898068 1461647 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:16:55.163488 1461647 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:16:56.207823 1461647 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:16:56.417222 1461647 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:16:56.418360 1461647 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:16:56.426417 1461647 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:16:56.430091 1461647 out.go:252]   - Booting up control plane ...
	I1002 22:16:56.430209 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:16:56.430302 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:16:56.430374 1461647 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:16:56.449134 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:16:56.449255 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:16:56.458304 1461647 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:16:56.458417 1461647 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:16:56.458470 1461647 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:16:56.582476 1461647 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:16:56.582603 1461647 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:16:53.147360 1464250 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-230628" ...
	I1002 22:16:53.147462 1464250 cli_runner.go:164] Run: docker start default-k8s-diff-port-230628
	I1002 22:16:53.486318 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:16:53.511179 1464250 kic.go:430] container "default-k8s-diff-port-230628" state is running.
	I1002 22:16:53.511595 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:53.540322 1464250 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/config.json ...
	I1002 22:16:53.540565 1464250 machine.go:93] provisionDockerMachine start ...
	I1002 22:16:53.540629 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:53.564571 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:53.564883 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:53.564892 1464250 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:16:53.570095 1464250 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:16:56.722513 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:16:56.722536 1464250 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-230628"
	I1002 22:16:56.722616 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.745126 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.745433 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.745450 1464250 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-230628 && echo "default-k8s-diff-port-230628" | sudo tee /etc/hostname
	I1002 22:16:56.909371 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-230628
	
	I1002 22:16:56.909492 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:56.935785 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:56.936109 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:56.936127 1464250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-230628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-230628/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-230628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:16:57.078389 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
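
The /etc/hosts heredoc above follows the Debian convention of binding the machine's hostname to 127.0.1.1 (separate from 127.0.0.1/localhost), rewriting an existing 127.0.1.1 entry or appending a new one. A minimal check that the rewrite landed, run inside the node:

    # expect: 127.0.1.1 default-k8s-diff-port-230628
    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts
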
	I1002 22:16:57.078416 1464250 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:16:57.078452 1464250 ubuntu.go:190] setting up certificates
	I1002 22:16:57.078464 1464250 provision.go:84] configureAuth start
	I1002 22:16:57.078528 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:57.103962 1464250 provision.go:143] copyHostCerts
	I1002 22:16:57.104043 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:16:57.104067 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:16:57.104157 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:16:57.104282 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:16:57.104293 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:16:57.104323 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:16:57.104417 1464250 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:16:57.104435 1464250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:16:57.104470 1464250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:16:57.104542 1464250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-230628 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-230628 localhost minikube]
	I1002 22:16:57.173268 1464250 provision.go:177] copyRemoteCerts
	I1002 22:16:57.173348 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:16:57.173401 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.197458 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.310188 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:16:57.337815 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:16:57.363629 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:16:57.393898 1464250 provision.go:87] duration metric: took 315.415822ms to configureAuth
	I1002 22:16:57.393966 1464250 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:16:57.394255 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:16:57.394409 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.419379 1464250 main.go:141] libmachine: Using SSH client type: native
	I1002 22:16:57.419762 1464250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34571 <nil> <nil>}
	I1002 22:16:57.419788 1464250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:16:57.747609 1464250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:16:57.747631 1464250 machine.go:96] duration metric: took 4.207056415s to provisionDockerMachine
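
The sysconfig step above writes CRIO_MINIKUBE_OPTIONS and restarts cri-o. A quick way to confirm the drop-in is picked up, assuming (as on the kicbase image) that the crio unit references /etc/sysconfig/crio.minikube via an EnvironmentFile= directive:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -n EnvironmentFile
    pgrep -ax crio    # the --insecure-registry flag should appear on the command line
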
	I1002 22:16:57.747678 1464250 start.go:293] postStartSetup for "default-k8s-diff-port-230628" (driver="docker")
	I1002 22:16:57.747689 1464250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:16:57.747749 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:16:57.747799 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.767729 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:57.866276 1464250 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:16:57.869924 1464250 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:16:57.869952 1464250 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:16:57.869963 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:16:57.870021 1464250 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:16:57.870125 1464250 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:16:57.870234 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:16:57.877904 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:16:57.896072 1464250 start.go:296] duration metric: took 148.377821ms for postStartSetup
	I1002 22:16:57.896224 1464250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:16:57.896319 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:57.913370 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.011325 1464250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:16:58.017811 1464250 fix.go:56] duration metric: took 4.894105584s for fixHost
	I1002 22:16:58.017838 1464250 start.go:83] releasing machines lock for "default-k8s-diff-port-230628", held for 4.894158063s
	I1002 22:16:58.017916 1464250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-230628
	I1002 22:16:58.043658 1464250 ssh_runner.go:195] Run: cat /version.json
	I1002 22:16:58.043745 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.044077 1464250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:16:58.044149 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:16:58.085645 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.093817 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:16:58.328980 1464250 ssh_runner.go:195] Run: systemctl --version
	I1002 22:16:58.338589 1464250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:16:58.403158 1464250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:16:58.410450 1464250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:16:58.410529 1464250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:16:58.420452 1464250 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:16:58.420472 1464250 start.go:495] detecting cgroup driver to use...
	I1002 22:16:58.420503 1464250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:16:58.420550 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:16:58.441159 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:16:58.469385 1464250 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:16:58.469532 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:16:58.489039 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:16:58.514417 1464250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:16:58.729270 1464250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:16:58.899442 1464250 docker.go:234] disabling docker service ...
	I1002 22:16:58.899528 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:16:58.927300 1464250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:16:58.956494 1464250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:16:59.153935 1464250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:16:59.380952 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:16:59.404925 1464250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:16:59.428535 1464250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:16:59.428602 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.443195 1464250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:16:59.443275 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.453072 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.470762 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.480680 1464250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:16:59.495366 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.511397 1464250 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.525950 1464250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:16:59.542756 1464250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:16:59.555536 1464250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:16:59.567342 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:16:59.777346 1464250 ssh_runner.go:195] Run: sudo systemctl restart crio
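
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup, and a default_sysctls block opening unprivileged ports. A condensed, idempotent sketch of the same edits (assumes GNU sed and the same drop-in path):

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i \
      -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
      -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    # add the sysctl list only if it is not already present
    grep -q '^ *default_sysctls' "$conf" || sudo sed -i \
      '/cgroup_manager = /a default_sysctls = [\n  "net.ipv4.ip_unprivileged_port_start=0",\n]' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio
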
	I1002 22:16:59.996294 1464250 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:16:59.996443 1464250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:17:00.000992 1464250 start.go:563] Will wait 60s for crictl version
	I1002 22:17:00.001101 1464250 ssh_runner.go:195] Run: which crictl
	I1002 22:17:00.006874 1464250 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:17:00.049873 1464250 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:17:00.049989 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.113645 1464250 ssh_runner.go:195] Run: crio --version
	I1002 22:17:00.192200 1464250 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:16:57.582618 1461647 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001518043s
	I1002 22:16:57.586746 1461647 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:16:57.586851 1461647 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:16:57.586982 1461647 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:16:57.587073 1461647 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:17:00.195337 1464250 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-230628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:17:00.224465 1464250 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:17:00.234568 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:17:00.266674 1464250 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:17:00.266810 1464250 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:17:00.266880 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.354979 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.355085 1464250 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:17:00.355202 1464250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:17:00.405746 1464250 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:17:00.405772 1464250 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:17:00.405795 1464250 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1002 22:17:00.405906 1464250 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-230628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
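
The kubelet unit drop-in shown above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below). After the daemon-reload, the merged unit and the effective ExecStart can be inspected with:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager
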
	I1002 22:17:00.405997 1464250 ssh_runner.go:195] Run: crio config
	I1002 22:17:00.513755 1464250 cni.go:84] Creating CNI manager for ""
	I1002 22:17:00.513782 1464250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:00.513809 1464250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:17:00.513840 1464250 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-230628 NodeName:default-k8s-diff-port-230628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:17:00.513999 1464250 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-230628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:17:00.514105 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:17:00.526167 1464250 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:17:00.526268 1464250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:17:00.538316 1464250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 22:17:00.561096 1464250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:17:00.587710 1464250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
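
The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new before being diffed against the previous run further down. kubeadm itself can sanity-check such a file; a hedged sketch (minikube does not run this here, but kubeadm v1.34 ships a `config validate` subcommand):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
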
	I1002 22:17:00.616497 1464250 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:17:00.626507 1464250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
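
host.minikube.internal and control-plane.minikube.internal are both maintained with the same grep-and-rewrite pattern on /etc/hosts. Resolution inside the node can be confirmed with:

    getent hosts host.minikube.internal control-plane.minikube.internal
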
	I1002 22:17:00.645309 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:00.849459 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:00.879244 1464250 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628 for IP: 192.168.76.2
	I1002 22:17:00.879319 1464250 certs.go:195] generating shared ca certs ...
	I1002 22:17:00.879364 1464250 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:00.879591 1464250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:17:00.879694 1464250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:17:00.879734 1464250 certs.go:257] generating profile certs ...
	I1002 22:17:00.879888 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.key
	I1002 22:17:00.880000 1464250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key.2d20e595
	I1002 22:17:00.880084 1464250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key
	I1002 22:17:00.880249 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:17:00.880327 1464250 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:17:00.880373 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:17:00.880479 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:17:00.880557 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:17:00.880606 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:17:00.880718 1464250 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:17:00.881770 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:17:00.946651 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:17:01.000213 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:17:01.078566 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:17:01.166792 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1002 22:17:01.226124 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:17:01.295374 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:17:01.350643 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:17:01.399862 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:17:01.436700 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:17:01.493113 1464250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:17:01.539726 1464250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:17:01.564705 1464250 ssh_runner.go:195] Run: openssl version
	I1002 22:17:01.575769 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:17:01.588296 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.592905 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.593075 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:17:01.662785 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:17:01.675327 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:17:01.699302 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703743 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.703887 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:17:01.778073 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:17:01.802386 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:17:01.832716 1464250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837808 1464250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.837956 1464250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:17:01.957141 1464250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
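
Each `ln -fs` above names its symlink after OpenSSL's subject hash (the output of `openssl x509 -hash -noout` plus a ".0" suffix), which is what CApath-style lookups in /etc/ssl/certs expect. A short loop to check the links are well-formed:

    for f in /usr/share/ca-certificates/*.pem; do
      h=$(openssl x509 -hash -noout -in "$f")   # subject hash, e.g. b5213941
      ls -l "/etc/ssl/certs/$h.0"
    done
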
	I1002 22:17:01.975541 1464250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:17:01.983041 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:17:02.120630 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:17:02.356670 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:17:02.597674 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:17:02.719438 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:17:02.835972 1464250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
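
The `-checkend 86400` probes above exit nonzero when a certificate expires within the next 24 hours, which is how the existing control-plane certs are judged reusable on restart. The same check over the whole cert tree, as a sketch:

    shopt -s globstar   # bash: enable ** recursion
    for c in /var/lib/minikube/certs/**/*.crt; do
      openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
    done
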
	I1002 22:17:02.935511 1464250 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-230628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-230628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:17:02.935685 1464250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:17:02.935805 1464250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:17:03.063735 1464250 cri.go:89] found id: "01235239b3d7b66144bf8695871cb046f63119cbc8df4573140b6f6c6e591533"
	I1002 22:17:03.063817 1464250 cri.go:89] found id: "f86d294e2252fadd80d85a001b6e4a46abde9a9392fd88c77dde413f0ee440ad"
	I1002 22:17:03.063838 1464250 cri.go:89] found id: "d628dbd9a32a7daf04eca0db048fc51c27a1ad8a1f6949519b2d129d15e23abf"
	I1002 22:17:03.063863 1464250 cri.go:89] found id: "0ad0bf20345abfcc25bcf338fba89dbe33e5f1b70801eda12faa774271ccb6cc"
	I1002 22:17:03.063903 1464250 cri.go:89] found id: ""
	I1002 22:17:03.064013 1464250 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:17:03.102669 1464250 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:17:03Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:17:03.102852 1464250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:17:03.138162 1464250 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:17:03.138183 1464250 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:17:03.138242 1464250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:17:03.154641 1464250 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:17:03.155232 1464250 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-230628" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.155450 1464250 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-230628" cluster setting kubeconfig missing "default-k8s-diff-port-230628" context setting]
	I1002 22:17:03.155902 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.157939 1464250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:17:03.191443 1464250 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:17:03.191529 1464250 kubeadm.go:601] duration metric: took 53.339576ms to restartPrimaryControlPlane
	I1002 22:17:03.191564 1464250 kubeadm.go:402] duration metric: took 256.062788ms to StartCluster
	I1002 22:17:03.191608 1464250 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.191760 1464250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:03.192606 1464250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:03.193181 1464250 config.go:182] Loaded profile config "default-k8s-diff-port-230628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:03.193294 1464250 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:03.193376 1464250 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.193392 1464250 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.193399 1464250 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:17:03.193421 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.193952 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.193267 1464250 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:03.194791 1464250 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.194824 1464250 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.194832 1464250 addons.go:247] addon dashboard should already be in state true
	I1002 22:17:03.194858 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.195313 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.195535 1464250 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-230628"
	I1002 22:17:03.195558 1464250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-230628"
	I1002 22:17:03.195827 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.211810 1464250 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:03.215190 1464250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:03.256729 1464250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:03.262848 1464250 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-230628"
	W1002 22:17:03.262873 1464250 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:17:03.262900 1464250 host.go:66] Checking if "default-k8s-diff-port-230628" exists ...
	I1002 22:17:03.263503 1464250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-230628 --format={{.State.Status}}
	I1002 22:17:03.264020 1464250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:17:03.266167 1464250 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.266192 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:03.266266 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.269780 1464250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:17:04.442415 1461647 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.851469395s
	I1002 22:17:03.272664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:17:03.272693 1464250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:17:03.272773 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.315483 1464250 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:03.315505 1464250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:03.315578 1464250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-230628
	I1002 22:17:03.321039 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.346244 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.364512 1464250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34571 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/default-k8s-diff-port-230628/id_rsa Username:docker}
	I1002 22:17:03.807998 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:03.826796 1464250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:17:03.854664 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:17:03.854685 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:17:03.944157 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:04.013117 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:17:04.013140 1464250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:17:04.123747 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:17:04.123823 1464250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:17:04.288279 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:17:04.288354 1464250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:17:04.364870 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:17:04.364959 1464250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:17:04.407691 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:17:04.407764 1464250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:17:04.480556 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:17:04.480585 1464250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:17:04.521232 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:17:04.521258 1464250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:17:04.559450 1464250 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:17:04.559476 1464250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:17:04.605034 1464250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
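
Once the dashboard manifests are applied, readiness can be followed with a rollout check; a sketch assuming the addon's usual kubernetes-dashboard namespace and deployment name:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
      rollout status deployment/kubernetes-dashboard --timeout=2m
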
	I1002 22:17:07.541309 1461647 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.953707302s
	I1002 22:17:09.588432 1461647 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 12.001620033s
	I1002 22:17:09.621336 1461647 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:17:09.639794 1461647 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:17:09.670650 1461647 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:17:09.671131 1461647 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-080134 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:17:09.705280 1461647 kubeadm.go:318] [bootstrap-token] Using token: n45vfv.yum1oz8wyqc2j4g1
	I1002 22:17:09.708338 1461647 out.go:252]   - Configuring RBAC rules ...
	I1002 22:17:09.712243 1461647 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:17:09.740982 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:17:09.759389 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:17:09.766849 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:17:09.771823 1461647 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:17:09.779533 1461647 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:17:09.996111 1461647 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:17:10.584421 1461647 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:17:11.003554 1461647 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:17:11.005488 1461647 kubeadm.go:318] 
	I1002 22:17:11.005580 1461647 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:17:11.005594 1461647 kubeadm.go:318] 
	I1002 22:17:11.005676 1461647 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:17:11.005686 1461647 kubeadm.go:318] 
	I1002 22:17:11.005712 1461647 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:17:11.008239 1461647 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:17:11.008309 1461647 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:17:11.008324 1461647 kubeadm.go:318] 
	I1002 22:17:11.008379 1461647 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:17:11.008389 1461647 kubeadm.go:318] 
	I1002 22:17:11.008436 1461647 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:17:11.008445 1461647 kubeadm.go:318] 
	I1002 22:17:11.008497 1461647 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:17:11.008576 1461647 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:17:11.008648 1461647 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:17:11.008658 1461647 kubeadm.go:318] 
	I1002 22:17:11.008769 1461647 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:17:11.008849 1461647 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:17:11.008860 1461647 kubeadm.go:318] 
	I1002 22:17:11.008943 1461647 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009049 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:17:11.009073 1461647 kubeadm.go:318] 	--control-plane 
	I1002 22:17:11.009083 1461647 kubeadm.go:318] 
	I1002 22:17:11.009186 1461647 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:17:11.009198 1461647 kubeadm.go:318] 
	I1002 22:17:11.009279 1461647 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token n45vfv.yum1oz8wyqc2j4g1 \
	I1002 22:17:11.009383 1461647 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:17:11.023465 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:17:11.023726 1461647 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:17:11.023853 1461647 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
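	The join commands printed by kubeadm above embed a sha256 discovery hash of the cluster CA. If that hash ever needs to be re-derived after the init output has scrolled away, the standard openssl pipeline from the kubeadm documentation reproduces it on the control-plane node; a sketch, assuming the default kubeadm PKI path:
	
		# Recompute the value passed as --discovery-token-ca-cert-hash
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	Prepend "sha256:" to the resulting hex digest when passing it to kubeadm join.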
	I1002 22:17:11.023875 1461647 cni.go:84] Creating CNI manager for ""
	I1002 22:17:11.023882 1461647 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:17:11.027244 1461647 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 22:17:11.030352 1461647 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:17:11.035499 1461647 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:17:11.035518 1461647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:17:11.081114 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
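	Once the CNI manifest is applied, the kindnet pods must come up in kube-system before the node can go Ready; a quick manual check (the app=kindnet label selector is an assumption about the kindnet manifest, which is not shown in the log):
	
		kubectl -n kube-system get pods -l app=kindnet -o wide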
	I1002 22:17:11.871876 1461647 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:17:11.872009 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:11.872082 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-080134 minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=embed-certs-080134 minikube.k8s.io/primary=true
	I1002 22:17:12.165350 1461647 ops.go:34] apiserver oom_adj: -16
	I1002 22:17:12.165457 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
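	The two kubectl invocations above grant cluster-admin to the kube-system default service account and tag the node with minikube metadata; the repeated "get sa default" runs that follow are the elevateKubeSystemPrivileges step polling until the "default" ServiceAccount exists. A shell sketch of the same sequence:
	
		kubectl create clusterrolebinding minikube-rbac \
		  --clusterrole=cluster-admin --serviceaccount=kube-system:default
		# poll until the default ServiceAccount has been created
		until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done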
	I1002 22:17:12.606880 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.798845764s)
	I1002 22:17:12.606926 1464250 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.780095412s)
	I1002 22:17:12.606945 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.662767941s)
	I1002 22:17:12.606965 1464250 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633571 1464250 node_ready.go:49] node "default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:12.633598 1464250 node_ready.go:38] duration metric: took 26.619873ms for node "default-k8s-diff-port-230628" to be "Ready" ...
	I1002 22:17:12.633611 1464250 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:12.633667 1464250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:12.636084 1464250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.030999638s)
	I1002 22:17:12.639185 1464250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-230628 addons enable metrics-server
	
	I1002 22:17:12.642199 1464250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1002 22:17:12.645102 1464250 addons.go:514] duration metric: took 9.451804219s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1002 22:17:12.654910 1464250 api_server.go:72] duration metric: took 9.460557178s to wait for apiserver process to appear ...
	I1002 22:17:12.654986 1464250 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:12.655023 1464250 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1002 22:17:12.664190 1464250 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1002 22:17:12.666133 1464250 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:12.666181 1464250 api_server.go:131] duration metric: took 11.174799ms to wait for apiserver health ...
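	The healthz probe logged above can be reproduced by hand; the apiserver serves a self-signed certificate, hence -k (address taken from the log):
	
		curl -k https://192.168.76.2:8444/healthz
		# a healthy apiserver answers: ok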
	I1002 22:17:12.666191 1464250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:12.670701 1464250 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:12.670783 1464250 system_pods.go:61] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.670807 1464250 system_pods.go:61] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.670826 1464250 system_pods.go:61] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.670849 1464250 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.670869 1464250 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.670889 1464250 system_pods.go:61] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.670922 1464250 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.670956 1464250 system_pods.go:61] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.670978 1464250 system_pods.go:74] duration metric: took 4.779311ms to wait for pod list to return data ...
	I1002 22:17:12.670999 1464250 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:12.674413 1464250 default_sa.go:45] found service account: "default"
	I1002 22:17:12.674491 1464250 default_sa.go:55] duration metric: took 3.468336ms for default service account to be created ...
	I1002 22:17:12.674515 1464250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:12.680553 1464250 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:12.680630 1464250 system_pods.go:89] "coredns-66bc5c9577-jvqks" [206b4ea5-2f69-433d-bc97-57d4534c0d3e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:12.680659 1464250 system_pods.go:89] "etcd-default-k8s-diff-port-230628" [6c01f491-6c83-4436-b487-b82b7368902b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:17:12.680678 1464250 system_pods.go:89] "kindnet-lvsjr" [f8186228-286b-41f0-a1c6-73ee4855d875] Running
	I1002 22:17:12.680700 1464250 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-230628" [8954fd0e-81fa-491a-a21b-17e0dfa6ee8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:17:12.680721 1464250 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-230628" [3db66c4c-c611-4c0b-ba2f-5675f221a83d] Running
	I1002 22:17:12.680750 1464250 system_pods.go:89] "kube-proxy-4l9vx" [08caf7ea-dac3-4c1f-877c-05db698d12e7] Running
	I1002 22:17:12.680772 1464250 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-230628" [5f207a38-99c3-461c-917a-aba6f4878752] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:17:12.680790 1464250 system_pods.go:89] "storage-provisioner" [ca549a84-19db-4830-a4a8-c9101d37fa26] Running
	I1002 22:17:12.680814 1464250 system_pods.go:126] duration metric: took 6.281332ms to wait for k8s-apps to be running ...
	I1002 22:17:12.680841 1464250 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:12.680914 1464250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:12.697527 1464250 system_svc.go:56] duration metric: took 16.67669ms WaitForService to wait for kubelet
	I1002 22:17:12.697600 1464250 kubeadm.go:586] duration metric: took 9.503251307s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:12.697634 1464250 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:12.702159 1464250 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:12.702246 1464250 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:12.702279 1464250 node_conditions.go:105] duration metric: took 4.623843ms to run NodePressure ...
	I1002 22:17:12.702306 1464250 start.go:241] waiting for startup goroutines ...
	I1002 22:17:12.702326 1464250 start.go:246] waiting for cluster config update ...
	I1002 22:17:12.702350 1464250 start.go:255] writing updated cluster config ...
	I1002 22:17:12.702676 1464250 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:12.706651 1464250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:12.712818 1464250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
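	The extra per-pod readiness wait that starts here has a direct kubectl equivalent; a sketch for the CoreDNS pod being tracked, using the same label and timeout as the log:
	
		kubectl -n kube-system wait pod -l k8s-app=kube-dns \
		  --for=condition=Ready --timeout=4m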
	I1002 22:17:12.666440 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.166439 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:13.665999 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.165574 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:14.665914 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.165512 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:15.665798 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.166361 1461647 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:17:16.458779 1461647 kubeadm.go:1113] duration metric: took 4.586825209s to wait for elevateKubeSystemPrivileges
	I1002 22:17:16.458818 1461647 kubeadm.go:402] duration metric: took 28.362818929s to StartCluster
	I1002 22:17:16.458834 1461647 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.458901 1461647 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:17:16.460309 1461647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:17:16.460534 1461647 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:17:16.460673 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:17:16.460941 1461647 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:17:16.460978 1461647 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:17:16.461037 1461647 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:17:16.461052 1461647 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	I1002 22:17:16.461073 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.461859 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.462004 1461647 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:17:16.462017 1461647 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:17:16.462286 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.470097 1461647 out.go:179] * Verifying Kubernetes components...
	I1002 22:17:16.475911 1461647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:17:16.506516 1461647 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	I1002 22:17:16.506559 1461647 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:17:16.506990 1461647 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:17:16.513185 1461647 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:17:16.516051 1461647 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:16.516076 1461647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:17:16.516144 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.543068 1461647 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:16.543090 1461647 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:17:16.543152 1461647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:17:16.567762 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:16.579891 1461647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34566 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:17:17.128545 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:17:17.152138 1461647 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:17:17.152325 1461647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1002 22:17:14.759931 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:17.220444 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:17.432367 1461647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:17:18.492317 1461647 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.339938615s)
	I1002 22:17:18.492378 1461647 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.340158418s)
	I1002 22:17:18.492406 1461647 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
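	The sed pipeline that just completed rewrites the coredns ConfigMap in place; reconstructed from the sed expression in the log, the fragment injected into the Corefile is:
	
		hosts {
		   192.168.85.1 host.minikube.internal
		   fallthrough
		}
	
	which lets in-cluster pods resolve host.minikube.internal to the host gateway.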
	I1002 22:17:18.493493 1461647 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:18.492334 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.363709152s)
	I1002 22:17:18.493868 1461647 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.061434042s)
	I1002 22:17:18.546845 1461647 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 22:17:18.549820 1461647 addons.go:514] duration metric: took 2.088826282s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:17:18.997495 1461647 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-080134" context rescaled to 1 replicas
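	The rescale logged above has a one-line kubectl equivalent:
	
		kubectl -n kube-system scale deployment coredns --replicas=1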
	W1002 22:17:20.497277 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:19.718973 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:21.728877 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:22.497342 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.998466 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:24.219680 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:26.720462 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:27.497483 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:29.996995 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:28.723923 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:31.220083 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:32.496612 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:34.496804 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:36.996831 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:33.719403 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:36.218685 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:38.996960 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:40.997613 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:38.718898 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:40.719450 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:42.721875 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:43.002209 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.500512 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:45.226594 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	W1002 22:17:47.718748 1464250 pod_ready.go:104] pod "coredns-66bc5c9577-jvqks" is not "Ready", error: <nil>
	I1002 22:17:49.718479 1464250 pod_ready.go:94] pod "coredns-66bc5c9577-jvqks" is "Ready"
	I1002 22:17:49.718503 1464250 pod_ready.go:86] duration metric: took 37.005609154s for pod "coredns-66bc5c9577-jvqks" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.721438 1464250 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.726333 1464250 pod_ready.go:94] pod "etcd-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.726362 1464250 pod_ready.go:86] duration metric: took 4.900277ms for pod "etcd-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.730900 1464250 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.735612 1464250 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.735638 1464250 pod_ready.go:86] duration metric: took 4.702176ms for pod "kube-apiserver-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.737984 1464250 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:49.916418 1464250 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:49.916457 1464250 pod_ready.go:86] duration metric: took 178.447296ms for pod "kube-controller-manager-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.116788 1464250 pod_ready.go:83] waiting for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.516203 1464250 pod_ready.go:94] pod "kube-proxy-4l9vx" is "Ready"
	I1002 22:17:50.516231 1464250 pod_ready.go:86] duration metric: took 399.418014ms for pod "kube-proxy-4l9vx" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:50.716110 1464250 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116917 1464250 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-230628" is "Ready"
	I1002 22:17:51.116944 1464250 pod_ready.go:86] duration metric: took 400.805238ms for pod "kube-scheduler-default-k8s-diff-port-230628" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:51.116958 1464250 pod_ready.go:40] duration metric: took 38.410228374s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:51.181171 1464250 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:17:51.186197 1464250 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-230628" cluster and "default" namespace by default
	W1002 22:17:47.997091 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:50.497007 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:52.497079 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:54.997340 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	W1002 22:17:57.496730 1461647 node_ready.go:57] node "embed-certs-080134" has "Ready":"False" status (will retry)
	I1002 22:17:58.496864 1461647 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:17:58.496899 1461647 node_ready.go:38] duration metric: took 40.003354073s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:17:58.496917 1461647 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:17:58.496981 1461647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:17:58.517204 1461647 api_server.go:72] duration metric: took 42.056630926s to wait for apiserver process to appear ...
	I1002 22:17:58.517227 1461647 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:17:58.517246 1461647 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:17:58.526908 1461647 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:17:58.528262 1461647 api_server.go:141] control plane version: v1.34.1
	I1002 22:17:58.528349 1461647 api_server.go:131] duration metric: took 11.114049ms to wait for apiserver health ...
	I1002 22:17:58.528374 1461647 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:17:58.532518 1461647 system_pods.go:59] 8 kube-system pods found
	I1002 22:17:58.532575 1461647 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.532582 1461647 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.532589 1461647 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.532593 1461647 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.532602 1461647 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.532606 1461647 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.532616 1461647 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.532622 1461647 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.532630 1461647 system_pods.go:74] duration metric: took 4.23825ms to wait for pod list to return data ...
	I1002 22:17:58.532649 1461647 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:17:58.537529 1461647 default_sa.go:45] found service account: "default"
	I1002 22:17:58.537556 1461647 default_sa.go:55] duration metric: took 4.901081ms for default service account to be created ...
	I1002 22:17:58.537566 1461647 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:17:58.637591 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.637621 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.637628 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.637634 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.637638 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.637644 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.637648 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.637651 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.637657 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.637676 1461647 retry.go:31] will retry after 276.140742ms: missing components: kube-dns
	I1002 22:17:58.918387 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:58.918423 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:58.918431 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:58.918439 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:58.918444 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:58.918449 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:58.918453 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:58.918458 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:58.918465 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:58.918482 1461647 retry.go:31] will retry after 317.04108ms: missing components: kube-dns
	I1002 22:17:59.238846 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.238886 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:17:59.238893 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.238900 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.238904 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.238909 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.238913 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.238917 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.238923 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:17:59.238942 1461647 retry.go:31] will retry after 307.274217ms: missing components: kube-dns
	I1002 22:17:59.549838 1461647 system_pods.go:86] 8 kube-system pods found
	I1002 22:17:59.549873 1461647 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running
	I1002 22:17:59.549880 1461647 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running
	I1002 22:17:59.549887 1461647 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:17:59.549892 1461647 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running
	I1002 22:17:59.549897 1461647 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running
	I1002 22:17:59.549900 1461647 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:17:59.549904 1461647 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running
	I1002 22:17:59.549908 1461647 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:17:59.549916 1461647 system_pods.go:126] duration metric: took 1.012345208s to wait for k8s-apps to be running ...
	I1002 22:17:59.549928 1461647 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:17:59.549993 1461647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:17:59.563622 1461647 system_svc.go:56] duration metric: took 13.684817ms WaitForService to wait for kubelet
	I1002 22:17:59.563650 1461647 kubeadm.go:586] duration metric: took 43.103081582s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:17:59.563668 1461647 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:17:59.566851 1461647 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:17:59.566886 1461647 node_conditions.go:123] node cpu capacity is 2
	I1002 22:17:59.566901 1461647 node_conditions.go:105] duration metric: took 3.227388ms to run NodePressure ...
	I1002 22:17:59.566913 1461647 start.go:241] waiting for startup goroutines ...
	I1002 22:17:59.566920 1461647 start.go:246] waiting for cluster config update ...
	I1002 22:17:59.566931 1461647 start.go:255] writing updated cluster config ...
	I1002 22:17:59.567215 1461647 ssh_runner.go:195] Run: rm -f paused
	I1002 22:17:59.571164 1461647 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:17:59.574876 1461647 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.580778 1461647 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:17:59.580806 1461647 pod_ready.go:86] duration metric: took 5.902795ms for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.583585 1461647 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.588251 1461647 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:17:59.588278 1461647 pod_ready.go:86] duration metric: took 4.667182ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.590708 1461647 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.595294 1461647 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:17:59.595357 1461647 pod_ready.go:86] duration metric: took 4.622867ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.597593 1461647 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:17:59.975568 1461647 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:17:59.975595 1461647 pod_ready.go:86] duration metric: took 377.978017ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.177466 1461647 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.575634 1461647 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:18:00.575741 1461647 pod_ready.go:86] duration metric: took 398.246371ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:00.776410 1461647 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176131 1461647 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:18:01.176165 1461647 pod_ready.go:86] duration metric: took 399.727181ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:18:01.176180 1461647 pod_ready.go:40] duration metric: took 1.604978311s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:01.231027 1461647 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:18:01.236820 1461647 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
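	At this point the kubeconfig context has been switched to the new profile, which can be confirmed with:
	
		kubectl config current-context   # expected: embed-certs-080134
		kubectl get nodes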
	
	
	==> CRI-O <==
	Oct 02 22:17:58 embed-certs-080134 crio[839]: time="2025-10-02T22:17:58.562359972Z" level=info msg="Created container 9a931e875eefd77374234f837e8aa10d5bbfa36e0edcb0c77419a05111bf8a72: kube-system/coredns-66bc5c9577-n47rb/coredns" id=04b01ecf-0537-4195-ac54-5e512978e8bd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:17:58 embed-certs-080134 crio[839]: time="2025-10-02T22:17:58.563609115Z" level=info msg="Starting container: 9a931e875eefd77374234f837e8aa10d5bbfa36e0edcb0c77419a05111bf8a72" id=e8cbd218-0b92-4230-b326-8d80af1767b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:17:58 embed-certs-080134 crio[839]: time="2025-10-02T22:17:58.565958808Z" level=info msg="Started container" PID=1739 containerID=9a931e875eefd77374234f837e8aa10d5bbfa36e0edcb0c77419a05111bf8a72 description=kube-system/coredns-66bc5c9577-n47rb/coredns id=e8cbd218-0b92-4230-b326-8d80af1767b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3c085151a2bdc02cfdf1261bbe46761297a1984292d415d86135f9becd3253b
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.772735786Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dda5027b-51a4-49e8-91f4-254687a6efa2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.772809754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.78699789Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:164acba741e34d6a1899d437dec240ca43b3bae8039e33849748f1567406d1ee UID:cdae129d-1c93-4cfd-96d9-cff208fdaf10 NetNS:/var/run/netns/cd683f10-d18d-46b5-8c42-44d1d235c1da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000130aa8}] Aliases:map[]}"
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.787055521Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.799277374Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:164acba741e34d6a1899d437dec240ca43b3bae8039e33849748f1567406d1ee UID:cdae129d-1c93-4cfd-96d9-cff208fdaf10 NetNS:/var/run/netns/cd683f10-d18d-46b5-8c42-44d1d235c1da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000130aa8}] Aliases:map[]}"
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.800283764Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.803706759Z" level=info msg="Ran pod sandbox 164acba741e34d6a1899d437dec240ca43b3bae8039e33849748f1567406d1ee with infra container: default/busybox/POD" id=dda5027b-51a4-49e8-91f4-254687a6efa2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.805263383Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4679cb61-7fd5-4324-bfb4-d09a8de90974 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.805383947Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4679cb61-7fd5-4324-bfb4-d09a8de90974 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.805420263Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4679cb61-7fd5-4324-bfb4-d09a8de90974 name=/runtime.v1.ImageService/ImageStatus
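	(placeholder — replaced by the clarify edit on this line)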
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.807324844Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6375f038-3c86-442e-be61-70e4a6f25cbe name=/runtime.v1.ImageService/PullImage
	Oct 02 22:18:01 embed-certs-080134 crio[839]: time="2025-10-02T22:18:01.809699792Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 22:18:03 embed-certs-080134 crio[839]: time="2025-10-02T22:18:03.997508267Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6375f038-3c86-442e-be61-70e4a6f25cbe name=/runtime.v1.ImageService/PullImage
	Oct 02 22:18:03 embed-certs-080134 crio[839]: time="2025-10-02T22:18:03.998769947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0bc978c2-b177-40b0-b97c-fef37740116b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.004580132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fe3e52e8-1fe4-44d2-9812-e4b8b573d294 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.012991977Z" level=info msg="Creating container: default/busybox/busybox" id=34786889-8fc4-474f-a41a-1cb6c6c166bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.01395726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.027006888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.028216811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.052219318Z" level=info msg="Created container dbc718be3998c72b1fe17131027b5a8b349ceb9df059c3447148909900f022c8: default/busybox/busybox" id=34786889-8fc4-474f-a41a-1cb6c6c166bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.056111269Z" level=info msg="Starting container: dbc718be3998c72b1fe17131027b5a8b349ceb9df059c3447148909900f022c8" id=4467946a-3507-4aab-9687-15c43d43eae1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:18:04 embed-certs-080134 crio[839]: time="2025-10-02T22:18:04.061120024Z" level=info msg="Started container" PID=1796 containerID=dbc718be3998c72b1fe17131027b5a8b349ceb9df059c3447148909900f022c8 description=default/busybox/busybox id=4467946a-3507-4aab-9687-15c43d43eae1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=164acba741e34d6a1899d437dec240ca43b3bae8039e33849748f1567406d1ee
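	The container lifecycle events above come straight from CRI-O; the same state can be queried on the node with crictl (a sketch — crictl accepts the abbreviated container IDs shown in the status table below):
	
		sudo crictl ps                     # list running containers
		sudo crictl logs dbc718be3998c     # logs of the busybox container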
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	dbc718be3998c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   164acba741e34       busybox                                      default
	9a931e875eefd       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   e3c085151a2bd       coredns-66bc5c9577-n47rb                     kube-system
	3f5cd7486db99       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   ac73532ee0b1e       storage-provisioner                          kube-system
	b0e93631d0d4a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   33f83658a20f6       kindnet-mv8z6                                kube-system
	c9aad95c89ee3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   e903e6d50acc4       kube-proxy-7lq28                             kube-system
	804f1aad1cb40       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ca776577a45b8       kube-scheduler-embed-certs-080134            kube-system
	26b5c1f2b15ab       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   b2d859cac3d48       kube-apiserver-embed-certs-080134            kube-system
	91b5325c01ffe       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   476947975dfd9       kube-controller-manager-embed-certs-080134   kube-system
	f54d6cf7c1130       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   bc9b195223f5f       etcd-embed-certs-080134                      kube-system
	
	
	==> coredns [9a931e875eefd77374234f837e8aa10d5bbfa36e0edcb0c77419a05111bf8a72] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59158 - 10625 "HINFO IN 1758961750670276409.3017216378469516085. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023691473s
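	A hypothetical in-cluster smoke test against this CoreDNS instance, reusing the busybox image pulled earlier in the log (busybox 1.28 ships an nslookup applet):
	
		kubectl run dns-test --rm -it --restart=Never \
		  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
		  -- nslookup host.minikube.internal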
	
	
	==> describe nodes <==
	Name:               embed-certs-080134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-080134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=embed-certs-080134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:17:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-080134
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:18:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:18:02 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:18:02 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:18:02 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:18:02 +0000   Thu, 02 Oct 2025 22:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-080134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f32e3b19fcb40f68e841ec7e170e1eb
	  System UUID:                de46e61e-5f26-496d-bb31-d89253767b5d
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-n47rb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-080134                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-mv8z6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-080134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-080134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-7lq28                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-080134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 75s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x8 over 75s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-080134 event: Registered Node embed-certs-080134 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-080134 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 21:45] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:46] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f54d6cf7c113002ef190d3da16a1251bc271209997dd329c05a092d58aa13bab] <==
	{"level":"warn","ts":"2025-10-02T22:17:04.336954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.387323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.421228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.479019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.501651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.558353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.624903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.656174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.690900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.724881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.763580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.808759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.842903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.887331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.915307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:04.942270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.005810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.013402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.067117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.088543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.117985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.146837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.174318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.197421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:17:05.393893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42246","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:18:12 up  7:00,  0 user,  load average: 2.57, 2.52, 2.16
	Linux embed-certs-080134 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b0e93631d0d4a2f2a417b58936efafcd7cd00c3880913d07964fe8f965250c12] <==
	I1002 22:17:17.353234       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:17:17.353527       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:17:17.406425       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:17:17.406454       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:17:17.406467       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:17:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:17:17.608408       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:17:17.608426       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:17:17.608434       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:17:17.608737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:17:47.608008       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:17:47.608951       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:17:47.609053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:17:47.609186       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:17:49.108734       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:17:49.108771       1 metrics.go:72] Registering metrics
	I1002 22:17:49.108843       1 controller.go:711] "Syncing nftables rules"
	I1002 22:17:57.614574       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:17:57.614632       1 main.go:301] handling current node
	I1002 22:18:07.609638       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:18:07.609675       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26b5c1f2b15abe7a70a089791c486f3dbf3a07a03ebbe859dbc746a99892c43b] <==
	I1002 22:17:07.463693       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:17:07.463735       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1002 22:17:07.478429       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:17:07.478647       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:17:07.478720       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 22:17:07.535891       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:17:07.536043       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:17:07.859924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 22:17:07.895220       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 22:17:07.895322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:17:09.378025       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:17:09.451835       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:17:09.674512       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 22:17:09.718759       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 22:17:09.720528       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:17:09.743812       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:17:10.358434       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:17:10.527099       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:17:10.583482       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 22:17:10.612675       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 22:17:16.058575       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 22:17:16.364203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:17:16.600765       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:17:16.703720       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1002 22:18:10.590994       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:46162: use of closed network connection
	
	
	==> kube-controller-manager [91b5325c01ffecf006f71c70061729ca17e205efcca32629228fa0d721c2dc68] <==
	I1002 22:17:15.378793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:17:15.379276       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:17:15.384886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:17:15.387714       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:17:15.392580       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-080134" podCIDRs=["10.244.0.0/24"]
	I1002 22:17:15.392933       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:17:15.398834       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 22:17:15.399054       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 22:17:15.399168       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-080134"
	I1002 22:17:15.399227       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1002 22:17:15.400476       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:17:15.402374       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 22:17:15.402439       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:17:15.402458       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:17:15.418144       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 22:17:15.422358       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:17:15.449861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:17:15.449892       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:17:15.449900       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:17:15.449981       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 22:17:15.452400       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1002 22:17:15.457647       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:17:15.459243       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:17:15.467469       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:18:00.407311       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c9aad95c89ee3b63cc316b30b98ae1a43d99995f551037472e03cce35e6ad0a5] <==
	I1002 22:17:17.417649       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:17:17.583949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:17:17.687498       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:17:17.687539       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:17:17.687612       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:17:17.859076       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:17:17.859199       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:17:17.866840       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:17:17.867223       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:17:17.867595       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:17:17.873108       1 config.go:200] "Starting service config controller"
	I1002 22:17:17.873126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:17:17.873139       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:17:17.873143       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:17:17.873151       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:17:17.873154       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:17:17.873805       1 config.go:309] "Starting node config controller"
	I1002 22:17:17.873813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:17:17.873821       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:17:17.975821       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:17:17.975864       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:17:17.975875       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [804f1aad1cb40d9a149111a88cbc8254c45746d4f9dcbab0af71e0b92724b2cf] <==
	E1002 22:17:07.545128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:17:07.545263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:17:07.545397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:17:07.545504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:17:07.545606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:17:07.545725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:17:07.545831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:17:07.546025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 22:17:07.558874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:17:07.559054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:17:07.559175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:17:07.559296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:17:07.559403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:17:07.559607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:17:07.559763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:17:08.515892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:17:08.600739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 22:17:08.608945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:17:08.630652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:17:08.671027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:17:08.705877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:17:08.757918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:17:08.776463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:17:08.844139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1002 22:17:10.893517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.145971    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/773ab73d-9ba2-45c8-8731-99bf5e77a39c-lib-modules\") pod \"kube-proxy-7lq28\" (UID: \"773ab73d-9ba2-45c8-8731-99bf5e77a39c\") " pod="kube-system/kube-proxy-7lq28"
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.251778    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/11af225f-d46c-4749-ae66-b539ef3deafc-cni-cfg\") pod \"kindnet-mv8z6\" (UID: \"11af225f-d46c-4749-ae66-b539ef3deafc\") " pod="kube-system/kindnet-mv8z6"
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.251945    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8jn\" (UniqueName: \"kubernetes.io/projected/11af225f-d46c-4749-ae66-b539ef3deafc-kube-api-access-2p8jn\") pod \"kindnet-mv8z6\" (UID: \"11af225f-d46c-4749-ae66-b539ef3deafc\") " pod="kube-system/kindnet-mv8z6"
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.251985    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11af225f-d46c-4749-ae66-b539ef3deafc-xtables-lock\") pod \"kindnet-mv8z6\" (UID: \"11af225f-d46c-4749-ae66-b539ef3deafc\") " pod="kube-system/kindnet-mv8z6"
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.252220    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11af225f-d46c-4749-ae66-b539ef3deafc-lib-modules\") pod \"kindnet-mv8z6\" (UID: \"11af225f-d46c-4749-ae66-b539ef3deafc\") " pod="kube-system/kindnet-mv8z6"
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.407747    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.407787    1308 projected.go:196] Error preparing data for projected volume kube-api-access-g6n9x for pod kube-system/kube-proxy-7lq28: configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.407863    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/773ab73d-9ba2-45c8-8731-99bf5e77a39c-kube-api-access-g6n9x podName:773ab73d-9ba2-45c8-8731-99bf5e77a39c nodeName:}" failed. No retries permitted until 2025-10-02 22:17:16.907835102 +0000 UTC m=+6.502399933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g6n9x" (UniqueName: "kubernetes.io/projected/773ab73d-9ba2-45c8-8731-99bf5e77a39c-kube-api-access-g6n9x") pod "kube-proxy-7lq28" (UID: "773ab73d-9ba2-45c8-8731-99bf5e77a39c") : configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.450539    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.450577    1308 projected.go:196] Error preparing data for projected volume kube-api-access-2p8jn for pod kube-system/kindnet-mv8z6: configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: E1002 22:17:16.450637    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/11af225f-d46c-4749-ae66-b539ef3deafc-kube-api-access-2p8jn podName:11af225f-d46c-4749-ae66-b539ef3deafc nodeName:}" failed. No retries permitted until 2025-10-02 22:17:16.950616869 +0000 UTC m=+6.545181708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2p8jn" (UniqueName: "kubernetes.io/projected/11af225f-d46c-4749-ae66-b539ef3deafc-kube-api-access-2p8jn") pod "kindnet-mv8z6" (UID: "11af225f-d46c-4749-ae66-b539ef3deafc") : configmap "kube-root-ca.crt" not found
	Oct 02 22:17:16 embed-certs-080134 kubelet[1308]: I1002 22:17:16.964622    1308 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:17:17 embed-certs-080134 kubelet[1308]: W1002 22:17:17.031959    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/crio-e903e6d50acc4de61c62cbd316101c2d94ad5eb89429725c468c11c3fd40c824 WatchSource:0}: Error finding container e903e6d50acc4de61c62cbd316101c2d94ad5eb89429725c468c11c3fd40c824: Status 404 returned error can't find the container with id e903e6d50acc4de61c62cbd316101c2d94ad5eb89429725c468c11c3fd40c824
	Oct 02 22:17:18 embed-certs-080134 kubelet[1308]: I1002 22:17:18.257295    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7lq28" podStartSLOduration=2.2572764 podStartE2EDuration="2.2572764s" podCreationTimestamp="2025-10-02 22:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:17:18.257083082 +0000 UTC m=+7.851647921" watchObservedRunningTime="2025-10-02 22:17:18.2572764 +0000 UTC m=+7.851841230"
	Oct 02 22:17:18 embed-certs-080134 kubelet[1308]: I1002 22:17:18.257404    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mv8z6" podStartSLOduration=2.257399532 podStartE2EDuration="2.257399532s" podCreationTimestamp="2025-10-02 22:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:17:18.22480471 +0000 UTC m=+7.819369541" watchObservedRunningTime="2025-10-02 22:17:18.257399532 +0000 UTC m=+7.851964379"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: I1002 22:17:58.093138    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: I1002 22:17:58.166422    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3bfae264-fe18-4caf-a609-570bc75daf7d-tmp\") pod \"storage-provisioner\" (UID: \"3bfae264-fe18-4caf-a609-570bc75daf7d\") " pod="kube-system/storage-provisioner"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: I1002 22:17:58.166489    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6sbn\" (UniqueName: \"kubernetes.io/projected/3bfae264-fe18-4caf-a609-570bc75daf7d-kube-api-access-x6sbn\") pod \"storage-provisioner\" (UID: \"3bfae264-fe18-4caf-a609-570bc75daf7d\") " pod="kube-system/storage-provisioner"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: I1002 22:17:58.267495    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a9aa9ca-1be0-44d0-b70d-54990bd49fa9-config-volume\") pod \"coredns-66bc5c9577-n47rb\" (UID: \"1a9aa9ca-1be0-44d0-b70d-54990bd49fa9\") " pod="kube-system/coredns-66bc5c9577-n47rb"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: I1002 22:17:58.267544    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2hd\" (UniqueName: \"kubernetes.io/projected/1a9aa9ca-1be0-44d0-b70d-54990bd49fa9-kube-api-access-bj2hd\") pod \"coredns-66bc5c9577-n47rb\" (UID: \"1a9aa9ca-1be0-44d0-b70d-54990bd49fa9\") " pod="kube-system/coredns-66bc5c9577-n47rb"
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: W1002 22:17:58.461009    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/crio-ac73532ee0b1e8b37d50e74da94b6c5e17c7f7912fb7ceb0044f35f94c68ca1d WatchSource:0}: Error finding container ac73532ee0b1e8b37d50e74da94b6c5e17c7f7912fb7ceb0044f35f94c68ca1d: Status 404 returned error can't find the container with id ac73532ee0b1e8b37d50e74da94b6c5e17c7f7912fb7ceb0044f35f94c68ca1d
	Oct 02 22:17:58 embed-certs-080134 kubelet[1308]: W1002 22:17:58.474966    1308 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/crio-e3c085151a2bdc02cfdf1261bbe46761297a1984292d415d86135f9becd3253b WatchSource:0}: Error finding container e3c085151a2bdc02cfdf1261bbe46761297a1984292d415d86135f9becd3253b: Status 404 returned error can't find the container with id e3c085151a2bdc02cfdf1261bbe46761297a1984292d415d86135f9becd3253b
	Oct 02 22:17:59 embed-certs-080134 kubelet[1308]: I1002 22:17:59.311334    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-n47rb" podStartSLOduration=43.311313358 podStartE2EDuration="43.311313358s" podCreationTimestamp="2025-10-02 22:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:17:59.294818327 +0000 UTC m=+48.889383166" watchObservedRunningTime="2025-10-02 22:17:59.311313358 +0000 UTC m=+48.905878197"
	Oct 02 22:17:59 embed-certs-080134 kubelet[1308]: I1002 22:17:59.330913    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.33089078 podStartE2EDuration="41.33089078s" podCreationTimestamp="2025-10-02 22:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:17:59.312059503 +0000 UTC m=+48.906624334" watchObservedRunningTime="2025-10-02 22:17:59.33089078 +0000 UTC m=+48.925455610"
	Oct 02 22:18:01 embed-certs-080134 kubelet[1308]: I1002 22:18:01.589270    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnzb\" (UniqueName: \"kubernetes.io/projected/cdae129d-1c93-4cfd-96d9-cff208fdaf10-kube-api-access-mcnzb\") pod \"busybox\" (UID: \"cdae129d-1c93-4cfd-96d9-cff208fdaf10\") " pod="default/busybox"
	
	
	==> storage-provisioner [3f5cd7486db99b3a4962ec90707bb274d2153ed96f1f71cd1322a3e1d7589113] <==
	I1002 22:17:58.571384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:17:58.596635       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:17:58.596761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:17:58.602324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:58.613693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:58.614151       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:17:58.614377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_4e22eb91-e019-4e87-8c88-dbabb59ee020!
	I1002 22:17:58.621686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5c4053a-7381-4524-a450-046d4cf76d2d", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-080134_4e22eb91-e019-4e87-8c88-dbabb59ee020 became leader
	W1002 22:17:58.622231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:17:58.641041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:17:58.717146       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_4e22eb91-e019-4e87-8c88-dbabb59ee020!
	W1002 22:18:00.643796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:00.648473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:02.655996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:02.666695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:04.670846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:04.685089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:06.693394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:06.701664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:08.704963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:08.710282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:10.713387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:10.737164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:12.741882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:18:12.758150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-080134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.863279ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:19:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
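The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state probe, which shells out to `sudo runc list -f json`; on these CRI-O nodes the runc state directory /run/runc was never created, so the probe exits with status 1. Below is a minimal Go sketch of that probe, assuming only that the check is a plain exec of the command shown in the stderr block; it is an illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same command the failing check runs; it exits non-zero when
	// /run/runc does not exist ("open /run/runc: no such file or directory").
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("check paused failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}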
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-975002 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-975002 describe deploy/metrics-server -n kube-system: exit status 1 (81.683877ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-975002 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
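The assertion at start_stop_delete_test.go:219 expects the describe output to contain the re-registered image "fake.domain/registry.k8s.io/echoserver:1.4"; because the addon never enabled, the deployment does not exist and the deployment info is empty. A hedged sketch of an equivalent check follows (the context name, namespace, and image string are copied from the log; the helper itself is hypothetical, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Describe the addon deployment and require the overridden image to appear.
	out, _ := exec.Command("kubectl", "--context", "no-preload-975002",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if !strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
		fmt.Println("addon did not load correct image")
	}
}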
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-975002
helpers_test.go:243: (dbg) docker inspect no-preload-975002:

-- stdout --
	[
	    {
	        "Id": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	        "Created": "2025-10-02T22:18:14.898358454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1469415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:18:14.989043652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hostname",
	        "HostsPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hosts",
	        "LogPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763-json.log",
	        "Name": "/no-preload-975002",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-975002:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-975002",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	                "LowerDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-975002",
	                "Source": "/var/lib/docker/volumes/no-preload-975002/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-975002",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-975002",
	                "name.minikube.sigs.k8s.io": "no-preload-975002",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb773ce26f8b2fb4bf7ab05bd9d83e31a73274514785bdef34a76bd4c3665b70",
	            "SandboxKey": "/var/run/docker/netns/bb773ce26f8b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34576"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34577"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34580"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34578"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34579"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-975002": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:4d:07:68:11:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdf5cacca0e9b177bb286cc49833ddf6be2feeac26e6da7eb90c632741658614",
	                    "EndpointID": "cbfb5737c029b8df44f1884b641fae6643d1fb52a1bce25195ff4c9dfc989fac",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-975002",
	                        "845f3e6dfe04"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
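For reference, the host-port mappings in the inspect output above are what the harness resolves before dialing SSH; the same lookup can be reproduced by hand with the Go template that appears later in this log (container name taken from the output above):

	# Print the host port mapped to the container's SSH port (22/tcp); per the mappings above this is 34576
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-975002
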
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-975002 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-975002 logs -n 25: (1.470195239s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-173127 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ delete  │ -p cert-expiration-247949                                                                                                                                                                                                                     │ cert-expiration-247949       │ jenkins │ v1.37.0 │ 02 Oct 25 22:14 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
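
The last audited command, the addons enable metrics-server invocation against this profile, is the step whose failure this post-mortem covers. A sketch of re-running it by hand (binary path as used throughout this report):

	out/minikube-linux-arm64 addons enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
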
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:18:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:18:27.241377 1471394 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:18:27.241590 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241616 1471394 out.go:374] Setting ErrFile to fd 2...
	I1002 22:18:27.241635 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241916 1471394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:18:27.242344 1471394 out.go:368] Setting JSON to false
	I1002 22:18:27.243293 1471394 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25233,"bootTime":1759418275,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:18:27.243384 1471394 start.go:140] virtualization:  
	I1002 22:18:27.248909 1471394 out.go:179] * [embed-certs-080134] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:18:27.252367 1471394 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:18:27.252410 1471394 notify.go:220] Checking for updates...
	I1002 22:18:27.259480 1471394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:18:27.262681 1471394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:27.265720 1471394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:18:27.268662 1471394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:18:27.271638 1471394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:18:27.275118 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:27.275745 1471394 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:18:27.313492 1471394 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:18:27.313615 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.410660 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.397108084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.410768 1471394 docker.go:318] overlay module found
	I1002 22:18:27.413943 1471394 out.go:179] * Using the docker driver based on existing profile
	I1002 22:18:27.416868 1471394 start.go:304] selected driver: docker
	I1002 22:18:27.416888 1471394 start.go:924] validating driver "docker" against &{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.416986 1471394 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:18:27.417693 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.511305 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.50234397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.511649 1471394 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:27.511690 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:27.511762 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:27.511806 1471394 start.go:348] cluster config:
	{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.516554 1471394 out.go:179] * Starting "embed-certs-080134" primary control-plane node in "embed-certs-080134" cluster
	I1002 22:18:27.519781 1471394 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:18:27.522245 1471394 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:18:27.525376 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:27.525450 1471394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:18:27.525463 1471394 cache.go:58] Caching tarball of preloaded images
	I1002 22:18:27.525477 1471394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:18:27.525612 1471394 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:18:27.525622 1471394 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:18:27.525733 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.550567 1471394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:18:27.550596 1471394 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:18:27.550618 1471394 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:18:27.550642 1471394 start.go:360] acquireMachinesLock for embed-certs-080134: {Name:mkb3c88b79da323c6aaa02ac6130cdaf0d74178c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:18:27.550700 1471394 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "embed-certs-080134"
	I1002 22:18:27.550727 1471394 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:18:27.550742 1471394 fix.go:54] fixHost starting: 
	I1002 22:18:27.551007 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.571312 1471394 fix.go:112] recreateIfNeeded on embed-certs-080134: state=Stopped err=<nil>
	W1002 22:18:27.571343 1471394 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:18:23.619484 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1002 22:18:23.646585 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 22:18:23.646707 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1002 22:18:23.698774 1469015 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 22:18:23.699086 1469015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389499 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.74273701s)
	I1002 22:18:25.389529 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 22:18:25.389567 1469015 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389646 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389734 1469015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.690609487s)
	I1002 22:18:25.389787 1469015 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 22:18:25.389818 1469015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389864 1469015 ssh_runner.go:195] Run: which crictl
	I1002 22:18:27.620887 1469015 ssh_runner.go:235] Completed: which crictl: (2.230998103s)
	I1002 22:18:27.620969 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:27.621111 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.231448285s)
	I1002 22:18:27.621127 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 22:18:27.621143 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.621168 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.575412 1471394 out.go:252] * Restarting existing docker container for "embed-certs-080134" ...
	I1002 22:18:27.575540 1471394 cli_runner.go:164] Run: docker start embed-certs-080134
	I1002 22:18:27.864000 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.887726 1471394 kic.go:430] container "embed-certs-080134" state is running.
	I1002 22:18:27.888104 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:27.917730 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.917985 1471394 machine.go:93] provisionDockerMachine start ...
	I1002 22:18:27.918062 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:27.968040 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:27.968363 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:27.968372 1471394 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:18:27.971583 1471394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:18:31.121910 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.121934 1471394 ubuntu.go:182] provisioning hostname "embed-certs-080134"
	I1002 22:18:31.121996 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.144299 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.144606 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.144618 1471394 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-080134 && echo "embed-certs-080134" | sudo tee /etc/hostname
	I1002 22:18:31.312251 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.312326 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.335751 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.336056 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.336080 1471394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-080134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-080134/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-080134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:18:31.486452 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:18:31.486527 1471394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:18:31.486563 1471394 ubuntu.go:190] setting up certificates
	I1002 22:18:31.486604 1471394 provision.go:84] configureAuth start
	I1002 22:18:31.486710 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:31.511138 1471394 provision.go:143] copyHostCerts
	I1002 22:18:31.511211 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:18:31.511228 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:18:31.511300 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:18:31.511399 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:18:31.511404 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:18:31.511430 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:18:31.511493 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:18:31.511498 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:18:31.511522 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:18:31.511575 1471394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.embed-certs-080134 san=[127.0.0.1 192.168.85.2 embed-certs-080134 localhost minikube]
	I1002 22:18:31.893293 1471394 provision.go:177] copyRemoteCerts
	I1002 22:18:31.893359 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:18:31.893409 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.911758 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.014147 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:18:32.050758 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:18:32.080185 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:18:32.105189 1471394 provision.go:87] duration metric: took 618.544299ms to configureAuth
	I1002 22:18:32.105275 1471394 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:18:32.105519 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:32.105705 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.124928 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:32.125251 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:32.125267 1471394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:18:29.233061 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.611867781s)
	I1002 22:18:29.233089 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 22:18:29.233107 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233152 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233217 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.612233888s)
	I1002 22:18:29.233255 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.533897 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.300616469s)
	I1002 22:18:30.533970 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.534151 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.300983641s)
	I1002 22:18:30.534167 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 22:18:30.534184 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:30.534215 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:32.211052 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.676812997s)
	I1002 22:18:32.211081 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 22:18:32.211098 1469015 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211148 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211195 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.677204678s)
	I1002 22:18:32.211239 1469015 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 22:18:32.211326 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:32.472964 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:18:32.472994 1471394 machine.go:96] duration metric: took 4.5549988s to provisionDockerMachine
	I1002 22:18:32.473005 1471394 start.go:293] postStartSetup for "embed-certs-080134" (driver="docker")
	I1002 22:18:32.473016 1471394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:18:32.473075 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:18:32.473112 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.504127 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.607008 1471394 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:18:32.610932 1471394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:18:32.610957 1471394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:18:32.610967 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:18:32.611017 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:18:32.611092 1471394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:18:32.611198 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:18:32.619379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:32.638857 1471394 start.go:296] duration metric: took 165.837176ms for postStartSetup
	I1002 22:18:32.639023 1471394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:18:32.639097 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.661986 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.756088 1471394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:18:32.763175 1471394 fix.go:56] duration metric: took 5.212432494s for fixHost
	I1002 22:18:32.763197 1471394 start.go:83] releasing machines lock for "embed-certs-080134", held for 5.212483004s
	I1002 22:18:32.763275 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:32.789872 1471394 ssh_runner.go:195] Run: cat /version.json
	I1002 22:18:32.789929 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.790254 1471394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:18:32.790302 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.828586 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.836099 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:33.043006 1471394 ssh_runner.go:195] Run: systemctl --version
	I1002 22:18:33.050763 1471394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:18:33.108072 1471394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:18:33.113998 1471394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:18:33.114128 1471394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:18:33.124321 1471394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:18:33.124388 1471394 start.go:495] detecting cgroup driver to use...
	I1002 22:18:33.124436 1471394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:18:33.124526 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:18:33.142258 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:18:33.157706 1471394 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:18:33.157778 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:18:33.175228 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:18:33.190313 1471394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:18:33.346607 1471394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:18:33.532428 1471394 docker.go:234] disabling docker service ...
	I1002 22:18:33.532502 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:18:33.563245 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:18:33.583139 1471394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:18:33.750538 1471394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:18:33.909095 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:18:33.926180 1471394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:18:33.956759 1471394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:18:33.956833 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.974951 1471394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:18:33.975030 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.988735 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.998132 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.011278 1471394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:18:34.023270 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.038545 1471394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.056908 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.071629 1471394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:18:34.080896 1471394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:18:34.089541 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:34.286970 1471394 ssh_runner.go:195] Run: sudo systemctl restart crio
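
The string of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted; a quick manual check of the resulting settings (a sketch, not part of the recorded run):

	# Confirm the pause image, cgroup manager, and conmon cgroup the sed commands wrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
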
	I1002 22:18:34.854866 1471394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:18:34.854996 1471394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:18:34.860113 1471394 start.go:563] Will wait 60s for crictl version
	I1002 22:18:34.860263 1471394 ssh_runner.go:195] Run: which crictl
	I1002 22:18:34.865256 1471394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:18:34.903231 1471394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:18:34.903362 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:34.963890 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:35.003885 1471394 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:18:35.007109 1471394 cli_runner.go:164] Run: docker network inspect embed-certs-080134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:18:35.031834 1471394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:18:35.036431 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:35.049506 1471394 kubeadm.go:883] updating cluster {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:18:35.049618 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:35.049727 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.097850 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.097880 1471394 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:18:35.097944 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.133423 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.133449 1471394 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:18:35.133457 1471394 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:35.133555 1471394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-080134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
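
The kubelet unit drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; to see the merged unit exactly as systemd resolves it on the node, something like this works (a sketch):

	# Show the kubelet service together with all drop-ins, including 10-kubeadm.conf
	sudo systemctl cat kubelet
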
	I1002 22:18:35.133645 1471394 ssh_runner.go:195] Run: crio config
	I1002 22:18:35.204040 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:35.204063 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:35.204081 1471394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:35.204128 1471394 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-080134 NodeName:embed-certs-080134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:35.204371 1471394 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-080134"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
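
Note: the block above is one multi-document YAML manifest, bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---"; this is what later gets written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch (not minikube's own code; the local path is hypothetical) that splits such a manifest and reports each document's kind:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Read a multi-document kubeadm manifest (hypothetical local copy).
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		// YAML documents are separated by a line containing only "---".
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("doc %d: %s\n", i, strings.TrimSpace(line))
				}
			}
		}
	}
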
	I1002 22:18:35.204464 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:35.213713 1471394 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:18:35.213818 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:35.222933 1471394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 22:18:35.247563 1471394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:35.270076 1471394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 22:18:35.285413 1471394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:35.289522 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
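
Note: the /bin/bash one-liner above makes the hosts entry idempotent: grep -v strips any existing control-plane.minikube.internal line, echo appends the fresh mapping, and the result is copied back over /etc/hosts. A standalone Go sketch of the same upsert (illustrative only; the file path is hypothetical and real use needs root):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost rewrites a hosts file so exactly one line maps ip to host.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for this hostname (matches grep -v above).
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := upsertHost("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
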
	I1002 22:18:35.312724 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:35.467847 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:35.484391 1471394 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134 for IP: 192.168.85.2
	I1002 22:18:35.484464 1471394 certs.go:195] generating shared ca certs ...
	I1002 22:18:35.484494 1471394 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:35.484661 1471394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:35.484747 1471394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:35.484772 1471394 certs.go:257] generating profile certs ...
	I1002 22:18:35.484898 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/client.key
	I1002 22:18:35.485001 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key.248cab64
	I1002 22:18:35.485075 1471394 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key
	I1002 22:18:35.485215 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:35.485273 1471394 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:35.485298 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:35.485348 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:35.485397 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:35.485447 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:35.485514 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:35.486237 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:35.506450 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:35.567521 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:35.627879 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:35.730475 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 22:18:35.779333 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:18:35.819379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:35.844088 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:18:35.873678 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:35.894361 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:35.912786 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:35.930978 1471394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:35.944370 1471394 ssh_runner.go:195] Run: openssl version
	I1002 22:18:35.951356 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:35.959750 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964120 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964262 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:36.007111 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:36.016761 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:36.026506 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031576 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031699 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.075194 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:18:36.086066 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:36.095588 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100529 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100692 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.144821 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
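
Note: the .0 symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention: tools resolve a CA in /etc/ssl/certs by the hash that "openssl x509 -hash -noout" prints. A Go sketch of the same hash-then-symlink step, shelling out to openssl (input path hypothetical; writing /etc/ssl/certs needs root):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // hypothetical input
		// openssl prints the subject-name hash that names the symlink.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// Equivalent of: ln -fs <cert> <hash>.0
		os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
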
	I1002 22:18:36.153776 1471394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:36.158258 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:18:36.245549 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:18:36.330215 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:18:36.420875 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:18:36.568928 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:18:36.711006 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
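
Note: -checkend 86400 makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24h); that is the freshness test applied to each control-plane cert above. The equivalent check with Go's crypto/x509 (sketch; the path is hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same cutoff as: openssl x509 -checkend 86400
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("certificate expires within 24h; regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}
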
	I1002 22:18:36.796194 1471394 kubeadm.go:400] StartCluster: {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:36.796345 1471394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:36.796474 1471394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:36.892149 1471394 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:18:36.892223 1471394 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:18:36.892241 1471394 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:18:36.892267 1471394 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:18:36.892300 1471394 cri.go:89] found id: ""
	I1002 22:18:36.892388 1471394 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:18:36.915069 1471394 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:36Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:36.915204 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:36.932678 1471394 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:18:36.932756 1471394 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:18:36.932847 1471394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:18:36.947880 1471394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:18:36.948438 1471394 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-080134" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.948600 1471394 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-080134" cluster setting kubeconfig missing "embed-certs-080134" context setting]
	I1002 22:18:36.948954 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.950584 1471394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:18:36.961307 1471394 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:18:36.961392 1471394 kubeadm.go:601] duration metric: took 28.610218ms to restartPrimaryControlPlane
	I1002 22:18:36.961415 1471394 kubeadm.go:402] duration metric: took 165.231557ms to StartCluster
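
Note: the restart decision above hinges on "sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new": when the freshly rendered config matches what the cluster was started with, minikube logs "does not require reconfiguration" and leaves the control plane alone. A stripped-down version of that decision (illustrative; byte equality stands in for diff):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		old, errOld := os.ReadFile("kubeadm.yaml")       // config the cluster was started with
		fresh, errNew := os.ReadFile("kubeadm.yaml.new") // config rendered for this run
		if errOld != nil || errNew != nil {
			fmt.Println("missing config; full (re)init required")
			return
		}
		if bytes.Equal(old, fresh) {
			fmt.Println("running cluster does not require reconfiguration")
			return
		}
		fmt.Println("config drifted; restart the control plane with the new manifest")
	}
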
	I1002 22:18:36.961458 1471394 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.961553 1471394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.962655 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.962911 1471394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:18:36.963445 1471394 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:18:36.963528 1471394 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:18:36.963542 1471394 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	W1002 22:18:36.963547 1471394 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:18:36.963571 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964072 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.964329 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:36.964414 1471394 addons.go:69] Setting dashboard=true in profile "embed-certs-080134"
	I1002 22:18:36.964442 1471394 addons.go:238] Setting addon dashboard=true in "embed-certs-080134"
	W1002 22:18:36.964463 1471394 addons.go:247] addon dashboard should already be in state true
	I1002 22:18:36.964512 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964975 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.965467 1471394 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:18:36.965492 1471394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:18:36.965772 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.969471 1471394 out.go:179] * Verifying Kubernetes components...
	I1002 22:18:36.974201 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:37.015067 1471394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:37.021148 1471394 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.021174 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:18:37.021254 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.024388 1471394 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	W1002 22:18:37.024435 1471394 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:18:37.024487 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:37.025002 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:37.054125 1471394 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:18:37.059462 1471394 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:18:37.067041 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:18:37.067076 1471394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:18:37.067158 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.075265 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.098924 1471394 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.098943 1471394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:18:37.099010 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.117464 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.143671 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.089260 1469015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.877913042s)
	I1002 22:18:37.089296 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 22:18:37.089321 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1002 22:18:37.089439 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.878271526s)
	I1002 22:18:37.089449 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 22:18:37.236522 1469015 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:37.236596 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:38.212691 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 22:18:38.212727 1469015 cache_images.go:124] Successfully loaded all cached images
	I1002 22:18:38.212734 1469015 cache_images.go:93] duration metric: took 15.924038027s to LoadCachedImages
	I1002 22:18:38.212745 1469015 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:38.212836 1469015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-975002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:18:38.212921 1469015 ssh_runner.go:195] Run: crio config
	I1002 22:18:38.318444 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:18:38.318513 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:38.318545 1469015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:38.318596 1469015 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-975002 NodeName:no-preload-975002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:38.318772 1469015 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-975002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:18:38.318873 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.331027 1469015 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 22:18:38.331144 1469015 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.348565 1469015 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 22:18:38.348774 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 22:18:38.349315 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 22:18:38.349739 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1002 22:18:38.355088 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 22:18:38.355120 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
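
Note: the "checksum=file:...sha256" suffix on the download URLs above tells the downloader to verify each binary against its published .sha256 file before staging it. A minimal Go sketch of that download-then-verify step (the URL is the real release URL from the log; the local destination is hypothetical):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // the file holds the hex digest
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch; refusing to install")
		}
		fmt.Println("checksum OK")
		os.WriteFile("kubectl", bin, 0755) // hypothetical local destination
	}
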
	I1002 22:18:37.478882 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.511961 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.569298 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:37.600558 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:18:37.600586 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:18:37.746513 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:18:37.746540 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:18:37.856796 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:18:37.856820 1471394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:18:37.949027 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:18:37.949050 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:18:38.030540 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:18:38.030566 1471394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:18:38.065948 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:18:38.065977 1471394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:18:38.117556 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:18:38.117636 1471394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:18:38.153468 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:18:38.153488 1471394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:18:38.174710 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:18:38.174731 1471394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:18:38.194835 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:18:39.419779 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:39.450053 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 22:18:39.460331 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 22:18:39.460370 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1002 22:18:39.645411 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 22:18:39.675855 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 22:18:39.675899 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1002 22:18:40.325202 1469015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:40.335086 1469015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:18:40.357734 1469015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:40.376320 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 22:18:40.402686 1469015 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:40.407035 1469015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:40.426631 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:40.633489 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:40.673001 1469015 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002 for IP: 192.168.76.2
	I1002 22:18:40.673026 1469015 certs.go:195] generating shared ca certs ...
	I1002 22:18:40.673042 1469015 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:40.673183 1469015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:40.673227 1469015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:40.673237 1469015 certs.go:257] generating profile certs ...
	I1002 22:18:40.673295 1469015 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key
	I1002 22:18:40.673312 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt with IP's: []
	I1002 22:18:41.128375 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt ...
	I1002 22:18:41.128406 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: {Name:mkfb502c73b4ad79c2095821374cc38c54249654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128600 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key ...
	I1002 22:18:41.128616 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key: {Name:mkd38cc7fd83e5057b4c9d7fd2e30313c24ba9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128721 1469015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57
	I1002 22:18:41.128741 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:18:41.508807 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 ...
	I1002 22:18:41.508837 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57: {Name:mkdebba222698e9ad33dbf8d5a6cf31ef95e43dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509040 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 ...
	I1002 22:18:41.509056 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57: {Name:mk918c9e23e54fec10949ffff53c7a04638071be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509152 1469015 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt
	I1002 22:18:41.509231 1469015 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key
	I1002 22:18:41.509292 1469015 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key
	I1002 22:18:41.509314 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt with IP's: []
	I1002 22:18:41.602000 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt ...
	I1002 22:18:41.602055 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt: {Name:mk609465dd816034a4031d70c3a4ad97b9295f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.602227 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key ...
	I1002 22:18:41.602243 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key: {Name:mk90272abbfdd0f5e7ed179e6e268a568e1c3a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
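
Note: the profile certs generated above are ordinary x509 certificates signed by minikubeCA; the apiserver cert carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2], where 10.96.0.1 is the in-cluster kubernetes service IP (the first address of the 10.96.0.0/12 service CIDR). A self-contained crypto/x509 sketch of issuing such a cert (illustrative; not minikube's code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Self-signed CA standing in for minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Apiserver serving cert with the IP SANs from the log above.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
	}
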
	I1002 22:18:41.602425 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:41.602469 1469015 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:41.602480 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:41.602507 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:41.602530 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:41.602553 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:41.602595 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:41.603153 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:41.622994 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:41.668729 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:41.733335 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:41.770774 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:18:41.811741 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:18:41.833957 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:41.866381 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:18:41.897084 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:41.933013 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:41.975831 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:42.007096 1469015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:42.040238 1469015 ssh_runner.go:195] Run: openssl version
	I1002 22:18:42.047701 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:42.058795 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.063942 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.064034 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.124036 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:18:42.136576 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:42.154191 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160428 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160611 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.210341 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:42.229193 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:42.246105 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252813 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252965 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.302268 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:18:42.317825 1469015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:42.325788 1469015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:18:42.325847 1469015 kubeadm.go:400] StartCluster: {Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:42.325922 1469015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:42.325984 1469015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:42.371270 1469015 cri.go:89] found id: ""
	I1002 22:18:42.371352 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:42.382662 1469015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:18:42.392183 1469015 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:18:42.392247 1469015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:18:42.411308 1469015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:18:42.411327 1469015 kubeadm.go:157] found existing configuration files:
	
	I1002 22:18:42.411379 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:18:42.423455 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:18:42.423521 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:18:42.447291 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:18:42.462645 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:18:42.462764 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:18:42.482289 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.502444 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:18:42.502557 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.520099 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:18:42.532069 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:18:42.532138 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
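
Note: the sweep above greps each of the four kubeconfig files for https://control-plane.minikube.internal:8443 and removes any file that does not reference it; on this first start none of the files exist, so the grep fails and each rm -f is a no-op. The same sweep as a Go sketch (illustrative; the directory is the real path but modifying it needs root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		dir := "/etc/kubernetes"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := filepath.Join(dir, name)
			data, err := os.ReadFile(path)
			if err != nil || !bytes.Contains(data, endpoint) {
				// Missing or pointing elsewhere: remove so kubeadm regenerates it.
				os.Remove(path)
				fmt.Println("removed stale", path)
			}
		}
	}
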
	I1002 22:18:42.541152 1469015 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:18:42.612796 1469015 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:18:42.613219 1469015 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:18:42.647545 1469015 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:18:42.647629 1469015 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:18:42.647681 1469015 kubeadm.go:318] OS: Linux
	I1002 22:18:42.647741 1469015 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:18:42.647801 1469015 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:18:42.647855 1469015 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:18:42.647910 1469015 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:18:42.647967 1469015 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:18:42.648022 1469015 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:18:42.648073 1469015 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:18:42.648128 1469015 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:18:42.648181 1469015 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:18:42.742836 1469015 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:18:42.742956 1469015 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:18:42.743057 1469015 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:18:42.766454 1469015 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:18:42.772327 1469015 out.go:252]   - Generating certificates and keys ...
	I1002 22:18:42.772429 1469015 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:18:42.772510 1469015 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:18:43.010659 1469015 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:18:47.782533 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.303615936s)
	I1002 22:18:47.782597 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.27061241s)
	I1002 22:18:47.782928 1471394 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.21360386s)
	I1002 22:18:47.782957 1471394 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.783211 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.588348325s)
	I1002 22:18:47.786416 1471394 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-080134 addons enable metrics-server
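	
	The command suggested above enables a single addon; the full set of addons and their states for this profile can be listed the same way (standard minikube subcommand, profile name taken from this log):
	
		minikube -p embed-certs-080134 addons list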
	
	I1002 22:18:47.810901 1471394 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:18:47.810980 1471394 node_ready.go:38] duration metric: took 28.010163ms for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.811008 1471394 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:18:47.811097 1471394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:18:47.829126 1471394 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1002 22:18:44.741285 1469015 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:18:45.042415 1469015 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:18:45.580611 1469015 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:18:45.990281 1469015 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:18:45.991552 1469015 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.331914 1469015 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:18:46.334626 1469015 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.859365 1469015 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:18:46.950416 1469015 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:18:47.286760 1469015 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:18:47.287351 1469015 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:18:47.922882 1469015 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:18:48.297144 1469015 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:18:48.621705 1469015 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:18:48.663782 1469015 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:18:48.920879 1469015 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:18:48.921552 1469015 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:18:48.924242 1469015 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:18:47.830679 1471394 api_server.go:72] duration metric: took 10.867713333s to wait for apiserver process to appear ...
	I1002 22:18:47.830745 1471394 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:18:47.830784 1471394 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:18:47.832587 1471394 addons.go:514] duration metric: took 10.869132458s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 22:18:47.840925 1471394 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:18:47.842205 1471394 api_server.go:141] control plane version: v1.34.1
	I1002 22:18:47.842226 1471394 api_server.go:131] duration metric: took 11.462399ms to wait for apiserver health ...
	I1002 22:18:47.842235 1471394 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:18:47.846620 1471394 system_pods.go:59] 8 kube-system pods found
	I1002 22:18:47.846652 1471394 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.846662 1471394 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.846668 1471394 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.846676 1471394 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.846683 1471394 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.846687 1471394 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.846694 1471394 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.846698 1471394 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.846703 1471394 system_pods.go:74] duration metric: took 4.462467ms to wait for pod list to return data ...
	I1002 22:18:47.846711 1471394 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:18:47.853249 1471394 default_sa.go:45] found service account: "default"
	I1002 22:18:47.853325 1471394 default_sa.go:55] duration metric: took 6.606684ms for default service account to be created ...
	I1002 22:18:47.853348 1471394 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:18:47.863524 1471394 system_pods.go:86] 8 kube-system pods found
	I1002 22:18:47.863562 1471394 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.863572 1471394 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.863578 1471394 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.863586 1471394 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.863592 1471394 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.863600 1471394 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.863607 1471394 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.863611 1471394 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.863621 1471394 system_pods.go:126] duration metric: took 10.255127ms to wait for k8s-apps to be running ...
	I1002 22:18:47.863635 1471394 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:18:47.863689 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:47.880954 1471394 system_svc.go:56] duration metric: took 17.310589ms WaitForService to wait for kubelet
	I1002 22:18:47.880983 1471394 kubeadm.go:586] duration metric: took 10.918021245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:47.881002 1471394 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:18:47.884495 1471394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:18:47.884528 1471394 node_conditions.go:123] node cpu capacity is 2
	I1002 22:18:47.884541 1471394 node_conditions.go:105] duration metric: took 3.532088ms to run NodePressure ...
	I1002 22:18:47.884553 1471394 start.go:241] waiting for startup goroutines ...
	I1002 22:18:47.884565 1471394 start.go:246] waiting for cluster config update ...
	I1002 22:18:47.884583 1471394 start.go:255] writing updated cluster config ...
	I1002 22:18:47.884874 1471394 ssh_runner.go:195] Run: rm -f paused
	I1002 22:18:47.888950 1471394 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:47.892876 1471394 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:18:49.899735 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:48.927837 1469015 out.go:252]   - Booting up control plane ...
	I1002 22:18:48.927941 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:18:48.928022 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:18:48.928092 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:18:48.965877 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:18:48.965994 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:18:48.977967 1469015 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:18:48.980386 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:18:48.980738 1469015 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:18:49.132675 1469015 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:18:49.132802 1469015 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:18:50.138969 1469015 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001293211s
	I1002 22:18:50.139082 1469015 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:18:50.139168 1469015 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:18:50.139262 1469015 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:18:50.139344 1469015 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 22:18:52.398467 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:54.403622 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:56.898368 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:57.339828 1469015 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.202235828s
	I1002 22:18:57.929769 1469015 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.792573735s
	I1002 22:18:59.139750 1469015 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002386073s
	I1002 22:18:59.165026 1469015 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:18:59.183011 1469015 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:18:59.201173 1469015 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:18:59.201382 1469015 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-975002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:18:59.219148 1469015 kubeadm.go:318] [bootstrap-token] Using token: hf2oiw.qzyeh524x9w4di8u
	I1002 22:18:59.222617 1469015 out.go:252]   - Configuring RBAC rules ...
	I1002 22:18:59.222757 1469015 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:18:59.229488 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:18:59.241677 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:18:59.248084 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:18:59.253811 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:18:59.259740 1469015 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:18:59.550870 1469015 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:19:00.226768 1469015 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:19:00.552936 1469015 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:19:00.558581 1469015 kubeadm.go:318] 
	I1002 22:19:00.558664 1469015 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:19:00.558671 1469015 kubeadm.go:318] 
	I1002 22:19:00.558753 1469015 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:19:00.558758 1469015 kubeadm.go:318] 
	I1002 22:19:00.558785 1469015 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:19:00.558847 1469015 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:19:00.558901 1469015 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:19:00.558905 1469015 kubeadm.go:318] 
	I1002 22:19:00.558962 1469015 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:19:00.558967 1469015 kubeadm.go:318] 
	I1002 22:19:00.559017 1469015 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:19:00.559023 1469015 kubeadm.go:318] 
	I1002 22:19:00.559078 1469015 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:19:00.559178 1469015 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:19:00.559252 1469015 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:19:00.559257 1469015 kubeadm.go:318] 
	I1002 22:19:00.559349 1469015 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:19:00.559430 1469015 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:19:00.559435 1469015 kubeadm.go:318] 
	I1002 22:19:00.559524 1469015 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559635 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:19:00.559656 1469015 kubeadm.go:318] 	--control-plane 
	I1002 22:19:00.559661 1469015 kubeadm.go:318] 
	I1002 22:19:00.559761 1469015 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:19:00.559766 1469015 kubeadm.go:318] 
	I1002 22:19:00.559852 1469015 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559959 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:19:00.562829 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:19:00.563068 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:19:00.563179 1469015 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:19:00.563275 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:19:00.563298 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:00.566853 1469015 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:18:58.900099 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:00.902930 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:00.569902 1469015 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:19:00.588293 1469015 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:19:00.588319 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:19:00.638521 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:19:01.293210 1469015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:19:01.293291 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:01.293358 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-975002 minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=no-preload-975002 minikube.k8s.io/primary=true
	I1002 22:19:01.602415 1469015 ops.go:34] apiserver oom_adj: -16
	I1002 22:19:01.602540 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.102816 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.603363 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.102729 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.602953 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.102788 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.603260 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.102657 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.269135 1469015 kubeadm.go:1113] duration metric: took 3.97589838s to wait for elevateKubeSystemPrivileges
	I1002 22:19:05.269177 1469015 kubeadm.go:402] duration metric: took 22.943334557s to StartCluster
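	The eight `kubectl get sa default` invocations above are minikube polling every ~500ms until the default service account exists, the precondition for the elevateKubeSystemPrivileges step timed here; a rough shell equivalent of that wait loop, assuming the same binary and kubeconfig paths:
	
		until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done
	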
	I1002 22:19:05.269206 1469015 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.269291 1469015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:05.271437 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.271828 1469015 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:19:05.272190 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:19:05.272370 1469015 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:05.272387 1469015 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:19:05.272511 1469015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-975002"
	I1002 22:19:05.272534 1469015 addons.go:238] Setting addon storage-provisioner=true in "no-preload-975002"
	I1002 22:19:05.272572 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.272672 1469015 addons.go:69] Setting default-storageclass=true in profile "no-preload-975002"
	I1002 22:19:05.272684 1469015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-975002"
	I1002 22:19:05.273163 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.273818 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.275562 1469015 out.go:179] * Verifying Kubernetes components...
	I1002 22:19:05.278736 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:05.328979 1469015 addons.go:238] Setting addon default-storageclass=true in "no-preload-975002"
	I1002 22:19:05.329094 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.329752 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.335568 1469015 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:19:05.338629 1469015 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.338666 1469015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:19:05.338754 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.386885 1469015 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:05.386920 1469015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:19:05.387019 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.427655 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.431370 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.903804 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.908412 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:05.908596 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:19:05.928210 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:06.745502 1469015 node_ready.go:35] waiting up to 6m0s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:06.745830 1469015 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1002 22:19:06.806136 1469015 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
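	The host-record injection reported at 22:19:06 comes from the sed pipeline run at 22:19:05; reconstructed from that sed expression, the stanza inserted ahead of the Corefile's forward block should read (a sketch derived from the command, not captured output):
	
		hosts {
		   192.168.76.1 host.minikube.internal
		   fallthrough
		}
	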
	W1002 22:19:03.399969 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:05.423860 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:06.809121 1469015 addons.go:514] duration metric: took 1.53671385s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:19:07.256252 1469015 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-975002" context rescaled to 1 replicas
	W1002 22:19:07.899654 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:10.398010 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:08.748541 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:10.749138 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.749360 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.398627 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:14.898925 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:15.248829 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.749454 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.399980 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:19.899037 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:20.904548 1471394 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:19:20.904576 1471394 pod_ready.go:86] duration metric: took 33.011675469s for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.907496 1471394 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.911938 1471394 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:19:20.911972 1471394 pod_ready.go:86] duration metric: took 4.45048ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.914304 1471394 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.918893 1471394 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:19:20.918920 1471394 pod_ready.go:86] duration metric: took 4.591261ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.921085 1471394 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.097644 1471394 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:19:21.097671 1471394 pod_ready.go:86] duration metric: took 176.560085ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.297761 1471394 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.696824 1471394 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:19:21.696855 1471394 pod_ready.go:86] duration metric: took 399.06335ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.897330 1471394 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296795 1471394 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:19:22.296828 1471394 pod_ready.go:86] duration metric: took 399.468273ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296841 1471394 pod_ready.go:40] duration metric: took 34.407859012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:22.362673 1471394 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:22.365724 1471394 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
	I1002 22:19:20.248320 1469015 node_ready.go:49] node "no-preload-975002" is "Ready"
	I1002 22:19:20.248352 1469015 node_ready.go:38] duration metric: took 13.502816799s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:20.248367 1469015 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:19:20.248430 1469015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:19:20.261347 1469015 api_server.go:72] duration metric: took 14.989475418s to wait for apiserver process to appear ...
	I1002 22:19:20.261372 1469015 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:19:20.261391 1469015 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:19:20.269713 1469015 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:19:20.271031 1469015 api_server.go:141] control plane version: v1.34.1
	I1002 22:19:20.271053 1469015 api_server.go:131] duration metric: took 9.67464ms to wait for apiserver health ...
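	The healthz probe above can be reproduced by hand from the host; a minimal sketch (-k skips TLS verification since the serving cert is signed by minikube's own CA, and default RBAC permits unauthenticated reads of /healthz):
	
		curl -sk https://192.168.76.2:8443/healthz
		# prints "ok" when the apiserver is healthy
	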
	I1002 22:19:20.271062 1469015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:19:20.274608 1469015 system_pods.go:59] 8 kube-system pods found
	I1002 22:19:20.274640 1469015 system_pods.go:61] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.274646 1469015 system_pods.go:61] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.274654 1469015 system_pods.go:61] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.274660 1469015 system_pods.go:61] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.274665 1469015 system_pods.go:61] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.274670 1469015 system_pods.go:61] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.274676 1469015 system_pods.go:61] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.274683 1469015 system_pods.go:61] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.274687 1469015 system_pods.go:74] duration metric: took 3.620185ms to wait for pod list to return data ...
	I1002 22:19:20.274695 1469015 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:19:20.280725 1469015 default_sa.go:45] found service account: "default"
	I1002 22:19:20.280751 1469015 default_sa.go:55] duration metric: took 6.050599ms for default service account to be created ...
	I1002 22:19:20.280761 1469015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:19:20.283528 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.283561 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.283568 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.283574 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.283579 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.283584 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.283588 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.283592 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.283598 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.283619 1469015 retry.go:31] will retry after 285.800121ms: missing components: kube-dns
	I1002 22:19:20.580010 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.580047 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.580054 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.580063 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.580067 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.580072 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.580077 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.580081 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.580091 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.580106 1469015 retry.go:31] will retry after 343.665312ms: missing components: kube-dns
	I1002 22:19:20.933271 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.933307 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.933315 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.933321 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.933325 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.933330 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.933334 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.933340 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.933344 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Running
	I1002 22:19:20.933352 1469015 system_pods.go:126] duration metric: took 652.584288ms to wait for k8s-apps to be running ...
	I1002 22:19:20.933360 1469015 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:19:20.933419 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:20.954574 1469015 system_svc.go:56] duration metric: took 21.205386ms WaitForService to wait for kubelet
	I1002 22:19:20.954602 1469015 kubeadm.go:586] duration metric: took 15.682736643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:19:20.954621 1469015 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:19:20.969765 1469015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:19:20.969799 1469015 node_conditions.go:123] node cpu capacity is 2
	I1002 22:19:20.969819 1469015 node_conditions.go:105] duration metric: took 15.185958ms to run NodePressure ...
	I1002 22:19:20.969836 1469015 start.go:241] waiting for startup goroutines ...
	I1002 22:19:20.969847 1469015 start.go:246] waiting for cluster config update ...
	I1002 22:19:20.969862 1469015 start.go:255] writing updated cluster config ...
	I1002 22:19:20.970232 1469015 ssh_runner.go:195] Run: rm -f paused
	I1002 22:19:20.977402 1469015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:20.983646 1469015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.989548 1469015 pod_ready.go:94] pod "coredns-66bc5c9577-rj4bn" is "Ready"
	I1002 22:19:21.989572 1469015 pod_ready.go:86] duration metric: took 1.005900106s for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.992406 1469015 pod_ready.go:83] waiting for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.997134 1469015 pod_ready.go:94] pod "etcd-no-preload-975002" is "Ready"
	I1002 22:19:21.997213 1469015 pod_ready.go:86] duration metric: took 4.778466ms for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.999604 1469015 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.008465 1469015 pod_ready.go:94] pod "kube-apiserver-no-preload-975002" is "Ready"
	I1002 22:19:22.008496 1469015 pod_ready.go:86] duration metric: took 8.861043ms for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.011576 1469015 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.187884 1469015 pod_ready.go:94] pod "kube-controller-manager-no-preload-975002" is "Ready"
	I1002 22:19:22.187912 1469015 pod_ready.go:86] duration metric: took 176.308323ms for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.393864 1469015 pod_ready.go:83] waiting for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.787934 1469015 pod_ready.go:94] pod "kube-proxy-lzzt4" is "Ready"
	I1002 22:19:22.787962 1469015 pod_ready.go:86] duration metric: took 394.069322ms for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.988369 1469015 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387713 1469015 pod_ready.go:94] pod "kube-scheduler-no-preload-975002" is "Ready"
	I1002 22:19:23.387749 1469015 pod_ready.go:86] duration metric: took 399.351959ms for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387762 1469015 pod_ready.go:40] duration metric: took 2.410324146s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:23.436907 1469015 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:23.440526 1469015 out.go:179] * Done! kubectl is now configured to use "no-preload-975002" cluster and "default" namespace by default
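	With the start sequence complete, the resulting cluster can be checked from the host; a minimal sketch (minikube names the kubectl context after the profile):
	
		kubectl --context no-preload-975002 get nodes
		kubectl --context no-preload-975002 -n kube-system get pods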
	
	
	==> CRI-O <==
	Oct 02 22:19:20 no-preload-975002 crio[837]: time="2025-10-02T22:19:20.570550638Z" level=info msg="Created container 46c00ee6410f0a26928254e012bb74bf0aad45288ba7c5f11373012db0304637: kube-system/coredns-66bc5c9577-rj4bn/coredns" id=25b589d2-a604-4f6d-a625-db1d06ef729d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:19:20 no-preload-975002 crio[837]: time="2025-10-02T22:19:20.571498592Z" level=info msg="Starting container: 46c00ee6410f0a26928254e012bb74bf0aad45288ba7c5f11373012db0304637" id=7064121a-c1f9-4e64-91d6-fa200177e6cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:19:20 no-preload-975002 crio[837]: time="2025-10-02T22:19:20.573824163Z" level=info msg="Started container" PID=2491 containerID=46c00ee6410f0a26928254e012bb74bf0aad45288ba7c5f11373012db0304637 description=kube-system/coredns-66bc5c9577-rj4bn/coredns id=7064121a-c1f9-4e64-91d6-fa200177e6cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=66986395373ad1fd365a1a8509ee768a7dccb6cd3fb80d624ed2b1fa5ccab74e
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.982266252Z" level=info msg="Running pod sandbox: default/busybox/POD" id=52b36c6b-9ab4-436f-86f6-44c5c6dc543a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.982421006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.988124148Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43 UID:812396e0-ab4d-4b5a-9a04-769a24c6ecc1 NetNS:/var/run/netns/5ec2522a-90c2-42f0-b556-2c5763f446e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013ae710}] Aliases:map[]}"
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.988301113Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.997413474Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43 UID:812396e0-ab4d-4b5a-9a04-769a24c6ecc1 NetNS:/var/run/netns/5ec2522a-90c2-42f0-b556-2c5763f446e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40013ae710}] Aliases:map[]}"
	Oct 02 22:19:23 no-preload-975002 crio[837]: time="2025-10-02T22:19:23.997567465Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.003632636Z" level=info msg="Ran pod sandbox 336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43 with infra container: default/busybox/POD" id=52b36c6b-9ab4-436f-86f6-44c5c6dc543a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.006842712Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a7ce2023-6a55-4d3d-9789-969fb279ac2d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.00700435Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a7ce2023-6a55-4d3d-9789-969fb279ac2d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.007050248Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a7ce2023-6a55-4d3d-9789-969fb279ac2d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.009891912Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ea8a3b1-9f97-4f78-875c-4ec45954885d name=/runtime.v1.ImageService/PullImage
	Oct 02 22:19:24 no-preload-975002 crio[837]: time="2025-10-02T22:19:24.013490805Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.080059293Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5ea8a3b1-9f97-4f78-875c-4ec45954885d name=/runtime.v1.ImageService/PullImage
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.080701284Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0299ae1d-42a1-4269-bacc-5810eeac8588 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.08261087Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=09559471-be7b-4ff7-a670-3321a64f2f12 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.088556945Z" level=info msg="Creating container: default/busybox/busybox" id=7dd141ad-62e7-4bbb-8a32-d0ee4fd0b88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.089394567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.094368065Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.094902546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.112315574Z" level=info msg="Created container 0865eec454e7ffc55272202ba0251216870e204504728edbb89fa7f23e14672c: default/busybox/busybox" id=7dd141ad-62e7-4bbb-8a32-d0ee4fd0b88f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.113025395Z" level=info msg="Starting container: 0865eec454e7ffc55272202ba0251216870e204504728edbb89fa7f23e14672c" id=e0683ca5-397d-4da7-9faa-b5b910881347 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:19:26 no-preload-975002 crio[837]: time="2025-10-02T22:19:26.115799179Z" level=info msg="Started container" PID=2543 containerID=0865eec454e7ffc55272202ba0251216870e204504728edbb89fa7f23e14672c description=default/busybox/busybox id=e0683ca5-397d-4da7-9faa-b5b910881347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43
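	The containers created in the CRI-O log above can be inspected directly on the node; a sketch via minikube ssh (crictl resolves unique ID prefixes, so the short busybox container ID from the log suffices):
	
		minikube -p no-preload-975002 ssh -- sudo crictl ps
		minikube -p no-preload-975002 ssh -- sudo crictl logs 0865eec454e7f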
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0865eec454e7f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   336a6db00c2c4       busybox                                     default
	46c00ee6410f0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   66986395373ad       coredns-66bc5c9577-rj4bn                    kube-system
	7e2480c18b069       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   0ae31cabafa80       storage-provisioner                         kube-system
	f4192dce8c1ba       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   1282ecd3ac456       kindnet-hpq6g                               kube-system
	bd8259891e287       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   5bdb6e42706a1       kube-proxy-lzzt4                            kube-system
	0f1bdc5cedb70       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   3c66b21a94ca7       kube-scheduler-no-preload-975002            kube-system
	74bb8a402aca1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   732f142be1806       kube-controller-manager-no-preload-975002   kube-system
	59ae919de3c69       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   d8e3fbc69b0cb       etcd-no-preload-975002                      kube-system
	04c364e2be71e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   9093f4ba3eaec       kube-apiserver-no-preload-975002            kube-system
	
	
	==> coredns [46c00ee6410f0a26928254e012bb74bf0aad45288ba7c5f11373012db0304637] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39540 - 19456 "HINFO IN 3003397167387330982.6513642452530476290. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016882953s
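	
	The lone HINFO query with a random numeric name is CoreDNS's loop-detection self-probe (NXDOMAIN here means no forwarding loop was found). In-cluster resolution can be spot-checked with the busybox image already pulled during this run; a sketch:
	
		kubectl --context no-preload-975002 run dns-check --rm -it --restart=Never \
		  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default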
	
	
	==> describe nodes <==
	Name:               no-preload-975002
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-975002
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=no-preload-975002
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-975002
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:19:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:19:31 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:19:31 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:19:31 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:19:31 +0000   Thu, 02 Oct 2025 22:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-975002
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3e17a548f2f48dcb4aeba937d3b6269
	  System UUID:                c00f53d9-fad4-4c59-816a-d3b3d9ec8fa6
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-rj4bn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-975002                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-hpq6g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-975002             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-975002    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-lzzt4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-975002             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-975002 event: Registered Node no-preload-975002 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-975002 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [59ae919de3c693bd8f86ea4bc3098299315f328145edf337ce88734013f92118] <==
	{"level":"warn","ts":"2025-10-02T22:18:53.398990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.487893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.534670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.572598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.622157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.649515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.691263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.736741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.775308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.819076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.857586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.890112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.932142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.960975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:53.993782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.044309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.066441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.110518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.148087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.177103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.224933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.295276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.340573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.401149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:54.597498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:19:34 up  7:01,  0 user,  load average: 4.97, 3.43, 2.51
	Linux no-preload-975002 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4192dce8c1ba427527f0792be63e89d55fd267054ac3b82b9e2793f3af41cf1] <==
	I1002 22:19:09.409772       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:19:09.409976       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:19:09.410132       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:19:09.410152       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:19:09.410162       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:19:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:19:09.611091       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:19:09.703213       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:19:09.703246       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:19:09.703361       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 22:19:09.903679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:19:09.903826       1 metrics.go:72] Registering metrics
	I1002 22:19:09.904033       1 controller.go:711] "Syncing nftables rules"
	I1002 22:19:19.617390       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:19:19.617430       1 main.go:301] handling current node
	I1002 22:19:29.612646       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:19:29.612682       1 main.go:301] handling current node
	
	
	==> kube-apiserver [04c364e2be71ee3e8dc67e721c9ac7afc7d441252db85fe902f23b4883f2a38d] <==
	I1002 22:18:57.020770       1 policy_source.go:240] refreshing policies
	I1002 22:18:57.027498       1 controller.go:667] quota admission added evaluator for: namespaces
	E1002 22:18:57.041083       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 22:18:57.053217       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:18:57.058396       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 22:18:57.127535       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 22:18:57.131955       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:18:57.132394       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:18:57.321188       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:18:57.352519       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 22:18:57.352542       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:18:58.805393       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:18:58.878726       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:18:59.080279       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 22:18:59.089571       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1002 22:18:59.091242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:18:59.101679       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:18:59.395882       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:19:00.163218       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:19:00.212721       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 22:19:00.252804       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 22:19:05.231020       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 22:19:05.480926       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:19:05.781453       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:19:05.876602       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [74bb8a402aca140411daa8a05c859cd8aa43f298e316d97aafb668fb9361cde2] <==
	I1002 22:19:04.434740       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 22:19:04.434829       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 22:19:04.434870       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 22:19:04.434883       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 22:19:04.434889       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 22:19:04.439342       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:19:04.452639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:19:04.453113       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 22:19:04.466163       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:19:04.467699       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-975002" podCIDRs=["10.244.0.0/24"]
	I1002 22:19:04.469506       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:19:04.469521       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:19:04.469528       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:19:04.472420       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:19:04.472622       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:19:04.472815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:19:04.474156       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:19:04.477977       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:19:04.479577       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 22:19:04.486250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:19:04.486422       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 22:19:04.489991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:19:04.490091       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:19:04.499297       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:19:24.427797       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bd8259891e2870fdda8d408e81c3db34312e999be8699fdef70cec208c639607] <==
	I1002 22:19:06.925681       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:19:07.031771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:19:07.132467       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:19:07.132509       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:19:07.132656       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:19:07.184735       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:19:07.184793       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:19:07.190369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:19:07.190682       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:19:07.190704       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:19:07.191866       1 config.go:200] "Starting service config controller"
	I1002 22:19:07.191884       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:19:07.199878       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:19:07.199900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:19:07.199919       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:19:07.199924       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:19:07.209939       1 config.go:309] "Starting node config controller"
	I1002 22:19:07.210024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:19:07.210173       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:19:07.292185       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:19:07.300416       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:19:07.300454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0f1bdc5cedb70bf4972b5fa363e9c05e28a9e26ed4c8d16d1cb671d9e06f57f9] <==
	I1002 22:18:57.853360       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:57.854533       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:57.854969       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:18:57.859558       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 22:18:57.872605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 22:18:57.881274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:18:57.881348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:18:57.881383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:18:57.881438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:18:57.889543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:18:57.889610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:18:57.889681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:18:57.889728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:18:57.889771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:18:57.889880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:18:57.889976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:18:57.890057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:18:57.890106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:18:57.890168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 22:18:57.905451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:18:57.905525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:18:57.905562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:18:57.913377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:18:58.773169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 22:19:01.856181       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:19:04 no-preload-975002 kubelet[2000]: I1002 22:19:04.555129    2000 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.464264    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a626a55d-103d-42f8-8f72-d72089831cc7-cni-cfg\") pod \"kindnet-hpq6g\" (UID: \"a626a55d-103d-42f8-8f72-d72089831cc7\") " pod="kube-system/kindnet-hpq6g"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.464322    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a626a55d-103d-42f8-8f72-d72089831cc7-lib-modules\") pod \"kindnet-hpq6g\" (UID: \"a626a55d-103d-42f8-8f72-d72089831cc7\") " pod="kube-system/kindnet-hpq6g"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.464349    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a626a55d-103d-42f8-8f72-d72089831cc7-xtables-lock\") pod \"kindnet-hpq6g\" (UID: \"a626a55d-103d-42f8-8f72-d72089831cc7\") " pod="kube-system/kindnet-hpq6g"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.464377    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qklkg\" (UniqueName: \"kubernetes.io/projected/a626a55d-103d-42f8-8f72-d72089831cc7-kube-api-access-qklkg\") pod \"kindnet-hpq6g\" (UID: \"a626a55d-103d-42f8-8f72-d72089831cc7\") " pod="kube-system/kindnet-hpq6g"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: E1002 22:19:05.464938    2000 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-975002\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-975002' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.564824    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p284q\" (UniqueName: \"kubernetes.io/projected/2990596b-be54-41a2-a537-a97040189e3f-kube-api-access-p284q\") pod \"kube-proxy-lzzt4\" (UID: \"2990596b-be54-41a2-a537-a97040189e3f\") " pod="kube-system/kube-proxy-lzzt4"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.564890    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2990596b-be54-41a2-a537-a97040189e3f-lib-modules\") pod \"kube-proxy-lzzt4\" (UID: \"2990596b-be54-41a2-a537-a97040189e3f\") " pod="kube-system/kube-proxy-lzzt4"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.564929    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2990596b-be54-41a2-a537-a97040189e3f-kube-proxy\") pod \"kube-proxy-lzzt4\" (UID: \"2990596b-be54-41a2-a537-a97040189e3f\") " pod="kube-system/kube-proxy-lzzt4"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.564957    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2990596b-be54-41a2-a537-a97040189e3f-xtables-lock\") pod \"kube-proxy-lzzt4\" (UID: \"2990596b-be54-41a2-a537-a97040189e3f\") " pod="kube-system/kube-proxy-lzzt4"
	Oct 02 22:19:05 no-preload-975002 kubelet[2000]: I1002 22:19:05.772176    2000 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:19:06 no-preload-975002 kubelet[2000]: W1002 22:19:06.019853    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-1282ecd3ac456d8c4465c7e4d68402eebca60c3336fc2e8507471ab79de7d1ba WatchSource:0}: Error finding container 1282ecd3ac456d8c4465c7e4d68402eebca60c3336fc2e8507471ab79de7d1ba: Status 404 returned error can't find the container with id 1282ecd3ac456d8c4465c7e4d68402eebca60c3336fc2e8507471ab79de7d1ba
	Oct 02 22:19:06 no-preload-975002 kubelet[2000]: W1002 22:19:06.652865    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-5bdb6e42706a183b7ec2951f97bda609c72e8689fb3ec7e8801424ed60f9bb53 WatchSource:0}: Error finding container 5bdb6e42706a183b7ec2951f97bda609c72e8689fb3ec7e8801424ed60f9bb53: Status 404 returned error can't find the container with id 5bdb6e42706a183b7ec2951f97bda609c72e8689fb3ec7e8801424ed60f9bb53
	Oct 02 22:19:09 no-preload-975002 kubelet[2000]: I1002 22:19:09.780939    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzzt4" podStartSLOduration=4.7809130159999995 podStartE2EDuration="4.780913016s" podCreationTimestamp="2025-10-02 22:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:19:07.779562415 +0000 UTC m=+7.724834590" watchObservedRunningTime="2025-10-02 22:19:09.780913016 +0000 UTC m=+9.726185174"
	Oct 02 22:19:10 no-preload-975002 kubelet[2000]: I1002 22:19:10.304211    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hpq6g" podStartSLOduration=1.996539595 podStartE2EDuration="5.304187384s" podCreationTimestamp="2025-10-02 22:19:05 +0000 UTC" firstStartedPulling="2025-10-02 22:19:06.024405197 +0000 UTC m=+5.969677356" lastFinishedPulling="2025-10-02 22:19:09.332052978 +0000 UTC m=+9.277325145" observedRunningTime="2025-10-02 22:19:09.781490006 +0000 UTC m=+9.726762190" watchObservedRunningTime="2025-10-02 22:19:10.304187384 +0000 UTC m=+10.249459543"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.133861    2000 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.283789    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a57d12e-2a90-4806-b64a-433cef84fcb9-config-volume\") pod \"coredns-66bc5c9577-rj4bn\" (UID: \"1a57d12e-2a90-4806-b64a-433cef84fcb9\") " pod="kube-system/coredns-66bc5c9577-rj4bn"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.283841    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjnm\" (UniqueName: \"kubernetes.io/projected/1a57d12e-2a90-4806-b64a-433cef84fcb9-kube-api-access-cbjnm\") pod \"coredns-66bc5c9577-rj4bn\" (UID: \"1a57d12e-2a90-4806-b64a-433cef84fcb9\") " pod="kube-system/coredns-66bc5c9577-rj4bn"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.283870    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5c6e3b7-bdd2-4497-aa9c-da91bed71489-tmp\") pod \"storage-provisioner\" (UID: \"d5c6e3b7-bdd2-4497-aa9c-da91bed71489\") " pod="kube-system/storage-provisioner"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.283892    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndbm\" (UniqueName: \"kubernetes.io/projected/d5c6e3b7-bdd2-4497-aa9c-da91bed71489-kube-api-access-qndbm\") pod \"storage-provisioner\" (UID: \"d5c6e3b7-bdd2-4497-aa9c-da91bed71489\") " pod="kube-system/storage-provisioner"
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: W1002 22:19:20.517884    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-66986395373ad1fd365a1a8509ee768a7dccb6cd3fb80d624ed2b1fa5ccab74e WatchSource:0}: Error finding container 66986395373ad1fd365a1a8509ee768a7dccb6cd3fb80d624ed2b1fa5ccab74e: Status 404 returned error can't find the container with id 66986395373ad1fd365a1a8509ee768a7dccb6cd3fb80d624ed2b1fa5ccab74e
	Oct 02 22:19:20 no-preload-975002 kubelet[2000]: I1002 22:19:20.828934    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rj4bn" podStartSLOduration=15.828918053 podStartE2EDuration="15.828918053s" podCreationTimestamp="2025-10-02 22:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:19:20.828207724 +0000 UTC m=+20.773479891" watchObservedRunningTime="2025-10-02 22:19:20.828918053 +0000 UTC m=+20.774190220"
	Oct 02 22:19:21 no-preload-975002 kubelet[2000]: I1002 22:19:21.835911    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.835891255 podStartE2EDuration="15.835891255s" podCreationTimestamp="2025-10-02 22:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:19:20.848371766 +0000 UTC m=+20.793643942" watchObservedRunningTime="2025-10-02 22:19:21.835891255 +0000 UTC m=+21.781163414"
	Oct 02 22:19:23 no-preload-975002 kubelet[2000]: I1002 22:19:23.804440    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7848z\" (UniqueName: \"kubernetes.io/projected/812396e0-ab4d-4b5a-9a04-769a24c6ecc1-kube-api-access-7848z\") pod \"busybox\" (UID: \"812396e0-ab4d-4b5a-9a04-769a24c6ecc1\") " pod="default/busybox"
	Oct 02 22:19:24 no-preload-975002 kubelet[2000]: W1002 22:19:24.003163    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43 WatchSource:0}: Error finding container 336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43: Status 404 returned error can't find the container with id 336a6db00c2c412e078549e4e64c686c92b96a2a39f4d1bc7f8c02a66710cf43
	
	
	==> storage-provisioner [7e2480c18b06959376c23ea7577f734c25011968c8e6edfad96131fdd519df64] <==
	I1002 22:19:20.554150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:19:20.583974       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:19:20.585846       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:19:20.588373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:20.601026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:20.601760       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:19:20.602246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-975002_b3e82da8-4865-4d32-a39c-122f5e63f54e!
	I1002 22:19:20.604613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f1ad02-57ac-43be-8016-6454cc1639da", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-975002_b3e82da8-4865-4d32-a39c-122f5e63f54e became leader
	W1002 22:19:20.613733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:20.621934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:20.703145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-975002_b3e82da8-4865-4d32-a39c-122f5e63f54e!
	W1002 22:19:22.625721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:22.639712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:24.643297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:24.651269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:26.654928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:26.662145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:28.665360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:28.672582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:30.676447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:30.680874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:32.684749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:32.692241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:34.696335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:34.705638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
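
The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader-election lock, which is still written to the legacy Endpoints object named in that log (k8s.io-minikube-hostpath in kube-system). A minimal way to inspect both sides, assuming the no-preload-975002 context from this run; these are read-only commands and change nothing:

    # Legacy Endpoints object the provisioner uses as its election lock:
    kubectl --context no-preload-975002 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

    # The discovery.k8s.io/v1 EndpointSlice API the warning points to instead:
    kubectl --context no-preload-975002 -n kube-system get endpointslices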
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-975002 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.79s)
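
The Pause failures in this run, including TestStartStop/group/embed-certs/serial/Pause below, all exit with GUEST_PAUSE after minikube's pause helper runs `sudo runc list -f json` on the node and gets `open /run/runc: no such file or directory`. A hedged diagnostic sketch, assuming the docker driver and cri-o runtime shown in these logs; /run/runc is runc's default state root, and the idea that cri-o may keep its runtime state elsewhere is an assumption, not something this log confirms:

    # Open a shell on the node (profile name taken from the failing test below):
    minikube ssh -p embed-certs-080134

    # What the CRI (cri-o) itself reports as running:
    sudo crictl ps

    # Does the state root the failing call reads even exist?
    ls -ld /run/runc

    # Reproduce the failing call explicitly; --root is runc's state-root flag:
    sudo runc --root /run/runc list -f json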

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-080134 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-080134 --alsologtostderr -v=1: exit status 80 (2.045933568s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-080134 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:19:34.236990 1475065 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:19:34.237201 1475065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:19:34.237229 1475065 out.go:374] Setting ErrFile to fd 2...
	I1002 22:19:34.237247 1475065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:19:34.237591 1475065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:19:34.237908 1475065 out.go:368] Setting JSON to false
	I1002 22:19:34.237960 1475065 mustload.go:65] Loading cluster: embed-certs-080134
	I1002 22:19:34.238504 1475065 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:34.239243 1475065 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:19:34.264358 1475065 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:19:34.264724 1475065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:19:34.363703 1475065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:19:34.353518826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:19:34.364400 1475065 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-080134 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:19:34.368357 1475065 out.go:179] * Pausing node embed-certs-080134 ... 
	I1002 22:19:34.371247 1475065 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:19:34.371575 1475065 ssh_runner.go:195] Run: systemctl --version
	I1002 22:19:34.371616 1475065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:19:34.396021 1475065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:19:34.497434 1475065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:34.536542 1475065 pause.go:51] kubelet running: true
	I1002 22:19:34.536627 1475065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:19:34.861552 1475065 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:19:34.861641 1475065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:19:34.944773 1475065 cri.go:89] found id: "2e99de6785ee276082cf2ab23e0c125a5ecf20685dbddf201dd67fbad2b0bae0"
	I1002 22:19:34.944792 1475065 cri.go:89] found id: "8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156"
	I1002 22:19:34.944797 1475065 cri.go:89] found id: "0a48ed5b3fbf47202add398ac63a0933859619e11b2d3a7a92ec0f84fd39b13d"
	I1002 22:19:34.944801 1475065 cri.go:89] found id: "b37b9a0b7fd291813508411cfc8272652f0c2752c32e03b610e896ac45ffcb46"
	I1002 22:19:34.944804 1475065 cri.go:89] found id: "f94e9eeb4727cf987f9a8b1a30b32e826140c814cd7eaca8aa5c10744f968eaa"
	I1002 22:19:34.944808 1475065 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:19:34.944811 1475065 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:19:34.944814 1475065 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:19:34.944817 1475065 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:19:34.944823 1475065 cri.go:89] found id: "e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	I1002 22:19:34.944826 1475065 cri.go:89] found id: "e25231cdd4c4fd0dd69d5de90f20d75c497d5d57ba39068e59d1bd6e70ac3e8e"
	I1002 22:19:34.944829 1475065 cri.go:89] found id: ""
	I1002 22:19:34.944877 1475065 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:19:34.956352 1475065 retry.go:31] will retry after 272.419324ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:19:34Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:19:35.229888 1475065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:35.244873 1475065 pause.go:51] kubelet running: false
	I1002 22:19:35.244951 1475065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:19:35.474606 1475065 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:19:35.474698 1475065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:19:35.571775 1475065 cri.go:89] found id: "2e99de6785ee276082cf2ab23e0c125a5ecf20685dbddf201dd67fbad2b0bae0"
	I1002 22:19:35.571799 1475065 cri.go:89] found id: "8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156"
	I1002 22:19:35.571823 1475065 cri.go:89] found id: "0a48ed5b3fbf47202add398ac63a0933859619e11b2d3a7a92ec0f84fd39b13d"
	I1002 22:19:35.571830 1475065 cri.go:89] found id: "b37b9a0b7fd291813508411cfc8272652f0c2752c32e03b610e896ac45ffcb46"
	I1002 22:19:35.571834 1475065 cri.go:89] found id: "f94e9eeb4727cf987f9a8b1a30b32e826140c814cd7eaca8aa5c10744f968eaa"
	I1002 22:19:35.571837 1475065 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:19:35.571841 1475065 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:19:35.571845 1475065 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:19:35.571863 1475065 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:19:35.571873 1475065 cri.go:89] found id: "e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	I1002 22:19:35.571876 1475065 cri.go:89] found id: "e25231cdd4c4fd0dd69d5de90f20d75c497d5d57ba39068e59d1bd6e70ac3e8e"
	I1002 22:19:35.571879 1475065 cri.go:89] found id: ""
	I1002 22:19:35.571937 1475065 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:19:35.598582 1475065 retry.go:31] will retry after 188.606947ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:19:35Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:19:35.787834 1475065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:35.802086 1475065 pause.go:51] kubelet running: false
	I1002 22:19:35.802154 1475065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:19:36.078782 1475065 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:19:36.078866 1475065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:19:36.168592 1475065 cri.go:89] found id: "2e99de6785ee276082cf2ab23e0c125a5ecf20685dbddf201dd67fbad2b0bae0"
	I1002 22:19:36.168613 1475065 cri.go:89] found id: "8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156"
	I1002 22:19:36.168619 1475065 cri.go:89] found id: "0a48ed5b3fbf47202add398ac63a0933859619e11b2d3a7a92ec0f84fd39b13d"
	I1002 22:19:36.168622 1475065 cri.go:89] found id: "b37b9a0b7fd291813508411cfc8272652f0c2752c32e03b610e896ac45ffcb46"
	I1002 22:19:36.168626 1475065 cri.go:89] found id: "f94e9eeb4727cf987f9a8b1a30b32e826140c814cd7eaca8aa5c10744f968eaa"
	I1002 22:19:36.168630 1475065 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:19:36.168632 1475065 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:19:36.168635 1475065 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:19:36.168638 1475065 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:19:36.168644 1475065 cri.go:89] found id: "e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	I1002 22:19:36.168647 1475065 cri.go:89] found id: "e25231cdd4c4fd0dd69d5de90f20d75c497d5d57ba39068e59d1bd6e70ac3e8e"
	I1002 22:19:36.168650 1475065 cri.go:89] found id: ""
	I1002 22:19:36.168707 1475065 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:19:36.185100 1475065 out.go:203] 
	W1002 22:19:36.187998 1475065 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:19:36.188017 1475065 out.go:285] * 
	W1002 22:19:36.197688 1475065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:19:36.200777 1475065 out.go:203] 

                                                
                                                
** /stderr **
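
The pause failure above never gets as far as pausing anything: each attempt shells into the node, runs "sudo runc list -f json" to enumerate running containers (retrying with short backoff via retry.go), and every attempt exits 1 because /run/runc is missing, so minikube gives up with GUEST_PAUSE. A minimal reproduction sketch against the kic container follows; the alternate state root is an assumption, since CRI-O may keep runc state somewhere other than runc's default /run/runc:

	# the exact call minikube issues inside the node (fails as in the log)
	docker exec embed-certs-080134 sudo runc list -f json
	# hypothetical: point runc at a non-default state root if the runtime uses one
	docker exec embed-certs-080134 sudo runc --root /run/crio/runc list
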
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-080134 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-080134
helpers_test.go:243: (dbg) docker inspect embed-certs-080134:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	        "Created": "2025-10-02T22:16:37.741033428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1471538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:18:27.611831299Z",
	            "FinishedAt": "2025-10-02T22:18:26.035152734Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hosts",
	        "LogPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e-json.log",
	        "Name": "/embed-certs-080134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-080134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-080134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	                "LowerDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-080134",
	                "Source": "/var/lib/docker/volumes/embed-certs-080134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-080134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-080134",
	                "name.minikube.sigs.k8s.io": "embed-certs-080134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e536d900c8d925da20d8c30ee4bd80b79cf90c7ffa0f4b18df861553e8c7dc8a",
	            "SandboxKey": "/var/run/docker/netns/e536d900c8d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34581"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34582"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34583"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34584"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-080134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:43:f9:48:f5:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a64f7585ec4aa24b8094a59cd780b3d89a1239c63c189f2097d1ca2a382a6ac",
	                    "EndpointID": "8bdd9c0e919f025ea03f667ba12d2b1f561b0193670a6dd17ff34c73118556d8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-080134",
	                        "d75a770c7fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
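
The inspect output above shows the container itself is still "Running" and not "Paused", so the failure is confined to the guest. When only those fields matter, a docker format template (standard docker CLI syntax) pulls them directly:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' embed-certs-080134
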
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134: exit status 2 (326.259103ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25: (1.450474032s)
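
Here -n 25 bounds the dump below to the most recent lines of each log source; for a complete capture, the advice box earlier in this failure suggests writing everything to a file instead, e.g. with the same binary and profile as this run:

	out/minikube-linux-arm64 -p embed-certs-080134 logs --file=logs.txt
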
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:18:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:18:27.241377 1471394 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:18:27.241590 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241616 1471394 out.go:374] Setting ErrFile to fd 2...
	I1002 22:18:27.241635 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241916 1471394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:18:27.242344 1471394 out.go:368] Setting JSON to false
	I1002 22:18:27.243293 1471394 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25233,"bootTime":1759418275,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:18:27.243384 1471394 start.go:140] virtualization:  
	I1002 22:18:27.248909 1471394 out.go:179] * [embed-certs-080134] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:18:27.252367 1471394 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:18:27.252410 1471394 notify.go:220] Checking for updates...
	I1002 22:18:27.259480 1471394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:18:27.262681 1471394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:27.265720 1471394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:18:27.268662 1471394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:18:27.271638 1471394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:18:27.275118 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:27.275745 1471394 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:18:27.313492 1471394 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:18:27.313615 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.410660 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.397108084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.410768 1471394 docker.go:318] overlay module found
	I1002 22:18:27.413943 1471394 out.go:179] * Using the docker driver based on existing profile
	I1002 22:18:27.416868 1471394 start.go:304] selected driver: docker
	I1002 22:18:27.416888 1471394 start.go:924] validating driver "docker" against &{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.416986 1471394 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:18:27.417693 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.511305 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.50234397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.511649 1471394 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:27.511690 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:27.511762 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:27.511806 1471394 start.go:348] cluster config:
	{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.516554 1471394 out.go:179] * Starting "embed-certs-080134" primary control-plane node in "embed-certs-080134" cluster
	I1002 22:18:27.519781 1471394 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:18:27.522245 1471394 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:18:27.525376 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:27.525450 1471394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:18:27.525463 1471394 cache.go:58] Caching tarball of preloaded images
	I1002 22:18:27.525477 1471394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:18:27.525612 1471394 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:18:27.525622 1471394 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:18:27.525733 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.550567 1471394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:18:27.550596 1471394 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:18:27.550618 1471394 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:18:27.550642 1471394 start.go:360] acquireMachinesLock for embed-certs-080134: {Name:mkb3c88b79da323c6aaa02ac6130cdaf0d74178c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:18:27.550700 1471394 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "embed-certs-080134"
	I1002 22:18:27.550727 1471394 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:18:27.550742 1471394 fix.go:54] fixHost starting: 
	I1002 22:18:27.551007 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.571312 1471394 fix.go:112] recreateIfNeeded on embed-certs-080134: state=Stopped err=<nil>
	W1002 22:18:27.571343 1471394 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:18:23.619484 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1002 22:18:23.646585 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 22:18:23.646707 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1002 22:18:23.698774 1469015 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 22:18:23.699086 1469015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389499 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.74273701s)
	I1002 22:18:25.389529 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 22:18:25.389567 1469015 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389646 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389734 1469015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.690609487s)
	I1002 22:18:25.389787 1469015 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 22:18:25.389818 1469015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389864 1469015 ssh_runner.go:195] Run: which crictl
	I1002 22:18:27.620887 1469015 ssh_runner.go:235] Completed: which crictl: (2.230998103s)
	I1002 22:18:27.620969 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:27.621111 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.231448285s)
	I1002 22:18:27.621127 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 22:18:27.621143 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.621168 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.575412 1471394 out.go:252] * Restarting existing docker container for "embed-certs-080134" ...
	I1002 22:18:27.575540 1471394 cli_runner.go:164] Run: docker start embed-certs-080134
	I1002 22:18:27.864000 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.887726 1471394 kic.go:430] container "embed-certs-080134" state is running.
	I1002 22:18:27.888104 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:27.917730 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.917985 1471394 machine.go:93] provisionDockerMachine start ...
	I1002 22:18:27.918062 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:27.968040 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:27.968363 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:27.968372 1471394 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:18:27.971583 1471394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:18:31.121910 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.121934 1471394 ubuntu.go:182] provisioning hostname "embed-certs-080134"
	I1002 22:18:31.121996 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.144299 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.144606 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.144618 1471394 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-080134 && echo "embed-certs-080134" | sudo tee /etc/hostname
	I1002 22:18:31.312251 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.312326 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.335751 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.336056 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.336080 1471394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-080134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-080134/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-080134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:18:31.486452 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:18:31.486527 1471394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:18:31.486563 1471394 ubuntu.go:190] setting up certificates
	I1002 22:18:31.486604 1471394 provision.go:84] configureAuth start
	I1002 22:18:31.486710 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:31.511138 1471394 provision.go:143] copyHostCerts
	I1002 22:18:31.511211 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:18:31.511228 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:18:31.511300 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:18:31.511399 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:18:31.511404 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:18:31.511430 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:18:31.511493 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:18:31.511498 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:18:31.511522 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:18:31.511575 1471394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.embed-certs-080134 san=[127.0.0.1 192.168.85.2 embed-certs-080134 localhost minikube]
	I1002 22:18:31.893293 1471394 provision.go:177] copyRemoteCerts
	I1002 22:18:31.893359 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:18:31.893409 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.911758 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.014147 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:18:32.050758 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:18:32.080185 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:18:32.105189 1471394 provision.go:87] duration metric: took 618.544299ms to configureAuth
	I1002 22:18:32.105275 1471394 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:18:32.105519 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:32.105705 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.124928 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:32.125251 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:32.125267 1471394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:18:29.233061 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.611867781s)
	I1002 22:18:29.233089 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 22:18:29.233107 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233152 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233217 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.612233888s)
	I1002 22:18:29.233255 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.533897 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.300616469s)
	I1002 22:18:30.533970 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.534151 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.300983641s)
	I1002 22:18:30.534167 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 22:18:30.534184 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:30.534215 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:32.211052 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.676812997s)
	I1002 22:18:32.211081 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 22:18:32.211098 1469015 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211148 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211195 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.677204678s)
	I1002 22:18:32.211239 1469015 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 22:18:32.211326 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:32.472964 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:18:32.472994 1471394 machine.go:96] duration metric: took 4.5549988s to provisionDockerMachine
	I1002 22:18:32.473005 1471394 start.go:293] postStartSetup for "embed-certs-080134" (driver="docker")
	I1002 22:18:32.473016 1471394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:18:32.473075 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:18:32.473112 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.504127 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.607008 1471394 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:18:32.610932 1471394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:18:32.610957 1471394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:18:32.610967 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:18:32.611017 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:18:32.611092 1471394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:18:32.611198 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:18:32.619379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:32.638857 1471394 start.go:296] duration metric: took 165.837176ms for postStartSetup
	I1002 22:18:32.639023 1471394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:18:32.639097 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.661986 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.756088 1471394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
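	The two df probes each read one field from the second output row (row 1 is the header): percent used and gigabytes free on /var. A standalone sketch; the sample outputs in the comments are illustrative, not from this run:

	df -h /var  | awk 'NR==2{print $5}'    # percent used, e.g. "12%" (example value)
	df -BG /var | awk 'NR==2{print $4}'    # GiB available, e.g. "180G" (example value)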
	I1002 22:18:32.763175 1471394 fix.go:56] duration metric: took 5.212432494s for fixHost
	I1002 22:18:32.763197 1471394 start.go:83] releasing machines lock for "embed-certs-080134", held for 5.212483004s
	I1002 22:18:32.763275 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:32.789872 1471394 ssh_runner.go:195] Run: cat /version.json
	I1002 22:18:32.789929 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.790254 1471394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:18:32.790302 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.828586 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.836099 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:33.043006 1471394 ssh_runner.go:195] Run: systemctl --version
	I1002 22:18:33.050763 1471394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:18:33.108072 1471394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:18:33.113998 1471394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:18:33.114128 1471394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:18:33.124321 1471394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
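	Before handing pod networking to kindnet, minikube renames any pre-existing bridge or podman CNI configs rather than deleting them; the .mk_disabled suffix keeps them restorable. The logged find command, with its shell quoting restored so it runs as written:

	# rename matching CNI configs out of the way; find substitutes {} into the sh -c body
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

	In this run nothing matched, so the next line reports no active bridge configs to disable.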
	I1002 22:18:33.124388 1471394 start.go:495] detecting cgroup driver to use...
	I1002 22:18:33.124436 1471394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:18:33.124526 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:18:33.142258 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:18:33.157706 1471394 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:18:33.157778 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:18:33.175228 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:18:33.190313 1471394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:18:33.346607 1471394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:18:33.532428 1471394 docker.go:234] disabling docker service ...
	I1002 22:18:33.532502 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:18:33.563245 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:18:33.583139 1471394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:18:33.750538 1471394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:18:33.909095 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:18:33.926180 1471394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:18:33.956759 1471394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:18:33.956833 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.974951 1471394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:18:33.975030 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.988735 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.998132 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.011278 1471394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:18:34.023270 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.038545 1471394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.056908 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.071629 1471394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:18:34.080896 1471394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:18:34.089541 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:34.286970 1471394 ssh_runner.go:195] Run: sudo systemctl restart crio
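	The sed edits above rewrite CRI-O's drop-in in place: pin the pause image, force the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged low ports through default_sysctls. Condensed to the two load-bearing edits plus the restart, using the same file and values as the log:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio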
	I1002 22:18:34.854866 1471394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:18:34.854996 1471394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:18:34.860113 1471394 start.go:563] Will wait 60s for crictl version
	I1002 22:18:34.860263 1471394 ssh_runner.go:195] Run: which crictl
	I1002 22:18:34.865256 1471394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:18:34.903231 1471394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:18:34.903362 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:34.963890 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:35.003885 1471394 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:18:35.007109 1471394 cli_runner.go:164] Run: docker network inspect embed-certs-080134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:18:35.031834 1471394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:18:35.036431 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
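	Note the hosts update deliberately avoids sed -i: inside a Docker-driver node /etc/hosts is a bind mount, so the file is filtered into a temp copy and cp'd back over the same inode (that rationale is inferred from the pattern, not stated in the log):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts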
	I1002 22:18:35.049506 1471394 kubeadm.go:883] updating cluster {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:18:35.049618 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:35.049727 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.097850 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.097880 1471394 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:18:35.097944 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.133423 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.133449 1471394 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:18:35.133457 1471394 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:35.133555 1471394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-080134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:18:35.133645 1471394 ssh_runner.go:195] Run: crio config
	I1002 22:18:35.204040 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:35.204063 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:35.204081 1471394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:35.204128 1471394 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-080134 NodeName:embed-certs-080134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:35.204371 1471394 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-080134"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:18:35.204464 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:35.213713 1471394 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:18:35.213818 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:35.222933 1471394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 22:18:35.247563 1471394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:35.270076 1471394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
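	The rendered kubeadm config above is staged as kubeadm.yaml.new and only swapped in if it differs from the kubeadm.yaml already on the node (the diff appears further down). Feeding it to kubeadm by hand would look like the sketch below; hedged, since this particular run takes the cluster-restart path and never calls init:

	# hypothetical manual invocation; paths are the ones minikube stages in this log
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml.new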
	I1002 22:18:35.285413 1471394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:35.289522 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:35.312724 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:35.467847 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:35.484391 1471394 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134 for IP: 192.168.85.2
	I1002 22:18:35.484464 1471394 certs.go:195] generating shared ca certs ...
	I1002 22:18:35.484494 1471394 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:35.484661 1471394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:35.484747 1471394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:35.484772 1471394 certs.go:257] generating profile certs ...
	I1002 22:18:35.484898 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/client.key
	I1002 22:18:35.485001 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key.248cab64
	I1002 22:18:35.485075 1471394 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key
	I1002 22:18:35.485215 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:35.485273 1471394 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:35.485298 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:35.485348 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:35.485397 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:35.485447 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:35.485514 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:35.486237 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:35.506450 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:35.567521 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:35.627879 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:35.730475 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 22:18:35.779333 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:18:35.819379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:35.844088 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:18:35.873678 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:35.894361 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:35.912786 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:35.930978 1471394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:35.944370 1471394 ssh_runner.go:195] Run: openssl version
	I1002 22:18:35.951356 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:35.959750 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964120 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964262 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:36.007111 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:36.016761 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:36.026506 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031576 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031699 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.075194 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:18:36.086066 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:36.095588 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100529 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100692 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.144821 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
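	The openssl/ln pairs above implement OpenSSL's hashed-CA-directory convention: a CA is looked up at <subject-hash>.0, where the hash is what openssl x509 -hash prints. For the minikube CA this run produced b5213941, hence the b5213941.0 symlink:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # -> b5213941.0 for this CA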
	I1002 22:18:36.153776 1471394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:36.158258 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:18:36.245549 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:18:36.330215 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:18:36.420875 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:18:36.568928 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:18:36.711006 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
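	Each probe uses -checkend 86400, which exits non-zero if the certificate would expire within 24 hours; a non-zero exit here is what would push minikube to regenerate instead of reusing. A minimal standalone version of one check:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo 'cert good for at least 24h; reuse'
	else
	  echo 'cert expires within 24h; regenerate'
	fi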
	I1002 22:18:36.796194 1471394 kubeadm.go:400] StartCluster: {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:36.796345 1471394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:36.796474 1471394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:36.892149 1471394 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:18:36.892223 1471394 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:18:36.892241 1471394 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:18:36.892267 1471394 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:18:36.892300 1471394 cri.go:89] found id: ""
	I1002 22:18:36.892388 1471394 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:18:36.915069 1471394 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:36Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:36.915204 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:36.932678 1471394 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:18:36.932756 1471394 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:18:36.932847 1471394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:18:36.947880 1471394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:18:36.948438 1471394 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-080134" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.948600 1471394 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-080134" cluster setting kubeconfig missing "embed-certs-080134" context setting]
	I1002 22:18:36.948954 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.950584 1471394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:18:36.961307 1471394 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:18:36.961392 1471394 kubeadm.go:601] duration metric: took 28.610218ms to restartPrimaryControlPlane
	I1002 22:18:36.961415 1471394 kubeadm.go:402] duration metric: took 165.231557ms to StartCluster
	I1002 22:18:36.961458 1471394 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.961553 1471394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.962655 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.962911 1471394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:18:36.963445 1471394 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:18:36.963528 1471394 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:18:36.963542 1471394 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	W1002 22:18:36.963547 1471394 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:18:36.963571 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964072 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.964329 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:36.964414 1471394 addons.go:69] Setting dashboard=true in profile "embed-certs-080134"
	I1002 22:18:36.964442 1471394 addons.go:238] Setting addon dashboard=true in "embed-certs-080134"
	W1002 22:18:36.964463 1471394 addons.go:247] addon dashboard should already be in state true
	I1002 22:18:36.964512 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964975 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.965467 1471394 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:18:36.965492 1471394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:18:36.965772 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.969471 1471394 out.go:179] * Verifying Kubernetes components...
	I1002 22:18:36.974201 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:37.015067 1471394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:37.021148 1471394 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.021174 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:18:37.021254 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.024388 1471394 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	W1002 22:18:37.024435 1471394 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:18:37.024487 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:37.025002 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:37.054125 1471394 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:18:37.059462 1471394 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:18:37.067041 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:18:37.067076 1471394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:18:37.067158 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.075265 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.098924 1471394 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.098943 1471394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:18:37.099010 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.117464 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.143671 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.089260 1469015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.877913042s)
	I1002 22:18:37.089296 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 22:18:37.089321 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1002 22:18:37.089439 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.878271526s)
	I1002 22:18:37.089449 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 22:18:37.236522 1469015 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:37.236596 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:38.212691 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 22:18:38.212727 1469015 cache_images.go:124] Successfully loaded all cached images
	I1002 22:18:38.212734 1469015 cache_images.go:93] duration metric: took 15.924038027s to LoadCachedImages
	I1002 22:18:38.212745 1469015 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:38.212836 1469015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-975002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:18:38.212921 1469015 ssh_runner.go:195] Run: crio config
	I1002 22:18:38.318444 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:18:38.318513 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:38.318545 1469015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:38.318596 1469015 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-975002 NodeName:no-preload-975002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:38.318772 1469015 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-975002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:18:38.318873 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.331027 1469015 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 22:18:38.331144 1469015 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.348565 1469015 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 22:18:38.348774 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 22:18:38.349315 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 22:18:38.349739 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet
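	With no preload tarball, each control-plane binary is fetched from dl.k8s.io and verified against the published .sha256 file, which contains only the bare digest. The same verified download for one binary, by hand (URLs from the log; the two spaces in the echo are the standard sha256sum input format):

	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
	curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check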
	I1002 22:18:38.355088 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 22:18:38.355120 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1002 22:18:37.478882 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.511961 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.569298 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:37.600558 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:18:37.600586 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:18:37.746513 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:18:37.746540 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:18:37.856796 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:18:37.856820 1471394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:18:37.949027 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:18:37.949050 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:18:38.030540 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:18:38.030566 1471394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:18:38.065948 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:18:38.065977 1471394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:18:38.117556 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:18:38.117636 1471394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:18:38.153468 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:18:38.153488 1471394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:18:38.174710 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:18:38.174731 1471394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:18:38.194835 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:18:39.419779 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:39.450053 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 22:18:39.460331 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 22:18:39.460370 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1002 22:18:39.645411 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 22:18:39.675855 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 22:18:39.675899 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1002 22:18:40.325202 1469015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:40.335086 1469015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:18:40.357734 1469015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:40.376320 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 22:18:40.402686 1469015 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:40.407035 1469015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:40.426631 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:40.633489 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:40.673001 1469015 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002 for IP: 192.168.76.2
	I1002 22:18:40.673026 1469015 certs.go:195] generating shared ca certs ...
	I1002 22:18:40.673042 1469015 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:40.673183 1469015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:40.673227 1469015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:40.673237 1469015 certs.go:257] generating profile certs ...
	I1002 22:18:40.673295 1469015 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key
	I1002 22:18:40.673312 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt with IP's: []
	I1002 22:18:41.128375 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt ...
	I1002 22:18:41.128406 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: {Name:mkfb502c73b4ad79c2095821374cc38c54249654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128600 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key ...
	I1002 22:18:41.128616 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key: {Name:mkd38cc7fd83e5057b4c9d7fd2e30313c24ba9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128721 1469015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57
	I1002 22:18:41.128741 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:18:41.508807 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 ...
	I1002 22:18:41.508837 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57: {Name:mkdebba222698e9ad33dbf8d5a6cf31ef95e43dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509040 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 ...
	I1002 22:18:41.509056 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57: {Name:mk918c9e23e54fec10949ffff53c7a04638071be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509152 1469015 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt
	I1002 22:18:41.509231 1469015 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key
	I1002 22:18:41.509292 1469015 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key
	I1002 22:18:41.509314 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt with IP's: []
	I1002 22:18:41.602000 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt ...
	I1002 22:18:41.602055 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt: {Name:mk609465dd816034a4031d70c3a4ad97b9295f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.602227 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key ...
	I1002 22:18:41.602243 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key: {Name:mk90272abbfdd0f5e7ed179e6e268a568e1c3a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
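	(For reference: the profile certs generated above are signed by the local minikube CA and carry IP SANs only, e.g. [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] for the apiserver cert. minikube does this in-process via crypto.go, not by shelling out; a rough openssl sketch of a cert of the same shape, with hypothetical file names, would be:
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	# requires bash for the <(...) process substitution
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2") \
	  -days 365 -out apiserver.crt
	)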
	I1002 22:18:41.602425 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:41.602469 1469015 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:41.602480 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:41.602507 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:41.602530 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:41.602553 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:41.602595 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:41.603153 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:41.622994 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:41.668729 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:41.733335 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:41.770774 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:18:41.811741 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:18:41.833957 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:41.866381 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:18:41.897084 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:41.933013 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:41.975831 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:42.007096 1469015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:42.040238 1469015 ssh_runner.go:195] Run: openssl version
	I1002 22:18:42.047701 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:42.058795 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.063942 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.064034 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.124036 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:18:42.136576 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:42.154191 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160428 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160611 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.210341 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:42.229193 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:42.246105 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252813 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252965 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.302268 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:18:42.317825 1469015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:42.325788 1469015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:18:42.325847 1469015 kubeadm.go:400] StartCluster: {Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:42.325922 1469015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:42.325984 1469015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:42.371270 1469015 cri.go:89] found id: ""
	I1002 22:18:42.371352 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:42.382662 1469015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:18:42.392183 1469015 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:18:42.392247 1469015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:18:42.411308 1469015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:18:42.411327 1469015 kubeadm.go:157] found existing configuration files:
	
	I1002 22:18:42.411379 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:18:42.423455 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:18:42.423521 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:18:42.447291 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:18:42.462645 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:18:42.462764 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:18:42.482289 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.502444 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:18:42.502557 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.520099 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:18:42.532069 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:18:42.532138 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 22:18:42.541152 1469015 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:18:42.612796 1469015 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:18:42.613219 1469015 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:18:42.647545 1469015 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:18:42.647629 1469015 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:18:42.647681 1469015 kubeadm.go:318] OS: Linux
	I1002 22:18:42.647741 1469015 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:18:42.647801 1469015 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:18:42.647855 1469015 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:18:42.647910 1469015 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:18:42.647967 1469015 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:18:42.648022 1469015 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:18:42.648073 1469015 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:18:42.648128 1469015 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:18:42.648181 1469015 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:18:42.742836 1469015 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:18:42.742956 1469015 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:18:42.743057 1469015 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:18:42.766454 1469015 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:18:42.772327 1469015 out.go:252]   - Generating certificates and keys ...
	I1002 22:18:42.772429 1469015 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:18:42.772510 1469015 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:18:43.010659 1469015 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:18:47.782533 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.303615936s)
	I1002 22:18:47.782597 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.27061241s)
	I1002 22:18:47.782928 1471394 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.21360386s)
	I1002 22:18:47.782957 1471394 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.783211 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.588348325s)
	I1002 22:18:47.786416 1471394 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-080134 addons enable metrics-server
	
	I1002 22:18:47.810901 1471394 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:18:47.810980 1471394 node_ready.go:38] duration metric: took 28.010163ms for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.811008 1471394 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:18:47.811097 1471394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:18:47.829126 1471394 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1002 22:18:44.741285 1469015 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:18:45.042415 1469015 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:18:45.580611 1469015 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:18:45.990281 1469015 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:18:45.991552 1469015 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.331914 1469015 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:18:46.334626 1469015 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.859365 1469015 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:18:46.950416 1469015 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:18:47.286760 1469015 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:18:47.287351 1469015 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:18:47.922882 1469015 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:18:48.297144 1469015 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:18:48.621705 1469015 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:18:48.663782 1469015 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:18:48.920879 1469015 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:18:48.921552 1469015 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:18:48.924242 1469015 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:18:47.830679 1471394 api_server.go:72] duration metric: took 10.867713333s to wait for apiserver process to appear ...
	I1002 22:18:47.830745 1471394 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:18:47.830784 1471394 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:18:47.832587 1471394 addons.go:514] duration metric: took 10.869132458s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 22:18:47.840925 1471394 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:18:47.842205 1471394 api_server.go:141] control plane version: v1.34.1
	I1002 22:18:47.842226 1471394 api_server.go:131] duration metric: took 11.462399ms to wait for apiserver health ...
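	(The healthz probe above is a plain HTTPS GET; a rough manual equivalent against the same endpoint, with -k skipping verification for brevity where the real check in api_server.go uses the cluster CA:
	curl -sk https://192.168.85.2:8443/healthz
	# prints: ok
	)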
	I1002 22:18:47.842235 1471394 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:18:47.846620 1471394 system_pods.go:59] 8 kube-system pods found
	I1002 22:18:47.846652 1471394 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.846662 1471394 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.846668 1471394 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.846676 1471394 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.846683 1471394 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.846687 1471394 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.846694 1471394 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.846698 1471394 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.846703 1471394 system_pods.go:74] duration metric: took 4.462467ms to wait for pod list to return data ...
	I1002 22:18:47.846711 1471394 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:18:47.853249 1471394 default_sa.go:45] found service account: "default"
	I1002 22:18:47.853325 1471394 default_sa.go:55] duration metric: took 6.606684ms for default service account to be created ...
	I1002 22:18:47.853348 1471394 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:18:47.863524 1471394 system_pods.go:86] 8 kube-system pods found
	I1002 22:18:47.863562 1471394 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.863572 1471394 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.863578 1471394 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.863586 1471394 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.863592 1471394 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.863600 1471394 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.863607 1471394 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.863611 1471394 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.863621 1471394 system_pods.go:126] duration metric: took 10.255127ms to wait for k8s-apps to be running ...
	I1002 22:18:47.863635 1471394 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:18:47.863689 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:47.880954 1471394 system_svc.go:56] duration metric: took 17.310589ms WaitForService to wait for kubelet
	I1002 22:18:47.880983 1471394 kubeadm.go:586] duration metric: took 10.918021245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:47.881002 1471394 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:18:47.884495 1471394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:18:47.884528 1471394 node_conditions.go:123] node cpu capacity is 2
	I1002 22:18:47.884541 1471394 node_conditions.go:105] duration metric: took 3.532088ms to run NodePressure ...
	I1002 22:18:47.884553 1471394 start.go:241] waiting for startup goroutines ...
	I1002 22:18:47.884565 1471394 start.go:246] waiting for cluster config update ...
	I1002 22:18:47.884583 1471394 start.go:255] writing updated cluster config ...
	I1002 22:18:47.884874 1471394 ssh_runner.go:195] Run: rm -f paused
	I1002 22:18:47.888950 1471394 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:47.892876 1471394 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:18:49.899735 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:48.927837 1469015 out.go:252]   - Booting up control plane ...
	I1002 22:18:48.927941 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:18:48.928022 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:18:48.928092 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:18:48.965877 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:18:48.965994 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:18:48.977967 1469015 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:18:48.980386 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:18:48.980738 1469015 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:18:49.132675 1469015 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:18:49.132802 1469015 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:18:50.138969 1469015 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001293211s
	I1002 22:18:50.139082 1469015 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:18:50.139168 1469015 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:18:50.139262 1469015 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:18:50.139344 1469015 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 22:18:52.398467 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:54.403622 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:56.898368 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:57.339828 1469015 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.202235828s
	I1002 22:18:57.929769 1469015 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.792573735s
	I1002 22:18:59.139750 1469015 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002386073s
	I1002 22:18:59.165026 1469015 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:18:59.183011 1469015 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:18:59.201173 1469015 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:18:59.201382 1469015 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-975002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:18:59.219148 1469015 kubeadm.go:318] [bootstrap-token] Using token: hf2oiw.qzyeh524x9w4di8u
	I1002 22:18:59.222617 1469015 out.go:252]   - Configuring RBAC rules ...
	I1002 22:18:59.222757 1469015 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:18:59.229488 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:18:59.241677 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:18:59.248084 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:18:59.253811 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:18:59.259740 1469015 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:18:59.550870 1469015 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:19:00.226768 1469015 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:19:00.552936 1469015 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:19:00.558581 1469015 kubeadm.go:318] 
	I1002 22:19:00.558664 1469015 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:19:00.558671 1469015 kubeadm.go:318] 
	I1002 22:19:00.558753 1469015 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:19:00.558758 1469015 kubeadm.go:318] 
	I1002 22:19:00.558785 1469015 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:19:00.558847 1469015 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:19:00.558901 1469015 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:19:00.558905 1469015 kubeadm.go:318] 
	I1002 22:19:00.558962 1469015 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:19:00.558967 1469015 kubeadm.go:318] 
	I1002 22:19:00.559017 1469015 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:19:00.559023 1469015 kubeadm.go:318] 
	I1002 22:19:00.559078 1469015 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:19:00.559178 1469015 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:19:00.559252 1469015 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:19:00.559257 1469015 kubeadm.go:318] 
	I1002 22:19:00.559349 1469015 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:19:00.559430 1469015 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:19:00.559435 1469015 kubeadm.go:318] 
	I1002 22:19:00.559524 1469015 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559635 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:19:00.559656 1469015 kubeadm.go:318] 	--control-plane 
	I1002 22:19:00.559661 1469015 kubeadm.go:318] 
	I1002 22:19:00.559761 1469015 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:19:00.559766 1469015 kubeadm.go:318] 
	I1002 22:19:00.559852 1469015 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559959 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:19:00.562829 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:19:00.563068 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:19:00.563179 1469015 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 22:19:00.563275 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:19:00.563298 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:00.566853 1469015 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:18:58.900099 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:00.902930 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:00.569902 1469015 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:19:00.588293 1469015 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:19:00.588319 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:19:00.638521 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:19:01.293210 1469015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:19:01.293291 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:01.293358 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-975002 minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=no-preload-975002 minikube.k8s.io/primary=true
	I1002 22:19:01.602415 1469015 ops.go:34] apiserver oom_adj: -16
	I1002 22:19:01.602540 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.102816 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.603363 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.102729 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.602953 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.102788 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.603260 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.102657 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.269135 1469015 kubeadm.go:1113] duration metric: took 3.97589838s to wait for elevateKubeSystemPrivileges
	I1002 22:19:05.269177 1469015 kubeadm.go:402] duration metric: took 22.943334557s to StartCluster
	I1002 22:19:05.269206 1469015 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.269291 1469015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:05.271437 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.271828 1469015 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:19:05.272190 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:19:05.272370 1469015 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:05.272387 1469015 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:19:05.272511 1469015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-975002"
	I1002 22:19:05.272534 1469015 addons.go:238] Setting addon storage-provisioner=true in "no-preload-975002"
	I1002 22:19:05.272572 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.272672 1469015 addons.go:69] Setting default-storageclass=true in profile "no-preload-975002"
	I1002 22:19:05.272684 1469015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-975002"
	I1002 22:19:05.273163 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.273818 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.275562 1469015 out.go:179] * Verifying Kubernetes components...
	I1002 22:19:05.278736 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:05.328979 1469015 addons.go:238] Setting addon default-storageclass=true in "no-preload-975002"
	I1002 22:19:05.329094 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.329752 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.335568 1469015 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:19:05.338629 1469015 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.338666 1469015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:19:05.338754 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.386885 1469015 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:05.386920 1469015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:19:05.387019 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.427655 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.431370 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.903804 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.908412 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:05.908596 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:19:05.928210 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:06.745502 1469015 node_ready.go:35] waiting up to 6m0s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:06.745830 1469015 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
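	(The sed pipeline run at 22:19:05 splices a hosts block into the Corefile ahead of its forward directive, which is what produces the host record injection logged above; the inserted fragment is:
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	)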
	I1002 22:19:06.806136 1469015 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1002 22:19:03.399969 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:05.423860 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:06.809121 1469015 addons.go:514] duration metric: took 1.53671385s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:19:07.256252 1469015 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-975002" context rescaled to 1 replicas
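	(Rescaling coredns to a single replica is standard for a single-node cluster; the equivalent manual step would be roughly:
	kubectl -n kube-system scale deployment coredns --replicas=1
	)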
	W1002 22:19:07.899654 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:10.398010 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:08.748541 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:10.749138 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.749360 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.398627 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:14.898925 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:15.248829 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.749454 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.399980 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:19.899037 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:20.904548 1471394 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:19:20.904576 1471394 pod_ready.go:86] duration metric: took 33.011675469s for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.907496 1471394 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.911938 1471394 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:19:20.911972 1471394 pod_ready.go:86] duration metric: took 4.45048ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.914304 1471394 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.918893 1471394 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:19:20.918920 1471394 pod_ready.go:86] duration metric: took 4.591261ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.921085 1471394 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.097644 1471394 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:19:21.097671 1471394 pod_ready.go:86] duration metric: took 176.560085ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.297761 1471394 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.696824 1471394 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:19:21.696855 1471394 pod_ready.go:86] duration metric: took 399.06335ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.897330 1471394 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296795 1471394 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:19:22.296828 1471394 pod_ready.go:86] duration metric: took 399.468273ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296841 1471394 pod_ready.go:40] duration metric: took 34.407859012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:22.362673 1471394 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:22.365724 1471394 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
	I1002 22:19:20.248320 1469015 node_ready.go:49] node "no-preload-975002" is "Ready"
	I1002 22:19:20.248352 1469015 node_ready.go:38] duration metric: took 13.502816799s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:20.248367 1469015 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:19:20.248430 1469015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:19:20.261347 1469015 api_server.go:72] duration metric: took 14.989475418s to wait for apiserver process to appear ...
	I1002 22:19:20.261372 1469015 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:19:20.261391 1469015 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:19:20.269713 1469015 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:19:20.271031 1469015 api_server.go:141] control plane version: v1.34.1
	I1002 22:19:20.271053 1469015 api_server.go:131] duration metric: took 9.67464ms to wait for apiserver health ...
	I1002 22:19:20.271062 1469015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:19:20.274608 1469015 system_pods.go:59] 8 kube-system pods found
	I1002 22:19:20.274640 1469015 system_pods.go:61] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.274646 1469015 system_pods.go:61] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.274654 1469015 system_pods.go:61] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.274660 1469015 system_pods.go:61] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.274665 1469015 system_pods.go:61] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.274670 1469015 system_pods.go:61] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.274676 1469015 system_pods.go:61] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.274683 1469015 system_pods.go:61] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.274687 1469015 system_pods.go:74] duration metric: took 3.620185ms to wait for pod list to return data ...
	I1002 22:19:20.274695 1469015 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:19:20.280725 1469015 default_sa.go:45] found service account: "default"
	I1002 22:19:20.280751 1469015 default_sa.go:55] duration metric: took 6.050599ms for default service account to be created ...
	I1002 22:19:20.280761 1469015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:19:20.283528 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.283561 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.283568 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.283574 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.283579 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.283584 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.283588 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.283592 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.283598 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.283619 1469015 retry.go:31] will retry after 285.800121ms: missing components: kube-dns
	I1002 22:19:20.580010 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.580047 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.580054 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.580063 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.580067 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.580072 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.580077 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.580081 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.580091 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.580106 1469015 retry.go:31] will retry after 343.665312ms: missing components: kube-dns
	I1002 22:19:20.933271 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.933307 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.933315 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.933321 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.933325 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.933330 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.933334 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.933340 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.933344 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Running
	I1002 22:19:20.933352 1469015 system_pods.go:126] duration metric: took 652.584288ms to wait for k8s-apps to be running ...
	I1002 22:19:20.933360 1469015 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:19:20.933419 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:20.954574 1469015 system_svc.go:56] duration metric: took 21.205386ms WaitForService to wait for kubelet
	I1002 22:19:20.954602 1469015 kubeadm.go:586] duration metric: took 15.682736643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:19:20.954621 1469015 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:19:20.969765 1469015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:19:20.969799 1469015 node_conditions.go:123] node cpu capacity is 2
	I1002 22:19:20.969819 1469015 node_conditions.go:105] duration metric: took 15.185958ms to run NodePressure ...
	I1002 22:19:20.969836 1469015 start.go:241] waiting for startup goroutines ...
	I1002 22:19:20.969847 1469015 start.go:246] waiting for cluster config update ...
	I1002 22:19:20.969862 1469015 start.go:255] writing updated cluster config ...
	I1002 22:19:20.970232 1469015 ssh_runner.go:195] Run: rm -f paused
	I1002 22:19:20.977402 1469015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:20.983646 1469015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.989548 1469015 pod_ready.go:94] pod "coredns-66bc5c9577-rj4bn" is "Ready"
	I1002 22:19:21.989572 1469015 pod_ready.go:86] duration metric: took 1.005900106s for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.992406 1469015 pod_ready.go:83] waiting for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.997134 1469015 pod_ready.go:94] pod "etcd-no-preload-975002" is "Ready"
	I1002 22:19:21.997213 1469015 pod_ready.go:86] duration metric: took 4.778466ms for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.999604 1469015 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.008465 1469015 pod_ready.go:94] pod "kube-apiserver-no-preload-975002" is "Ready"
	I1002 22:19:22.008496 1469015 pod_ready.go:86] duration metric: took 8.861043ms for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.011576 1469015 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.187884 1469015 pod_ready.go:94] pod "kube-controller-manager-no-preload-975002" is "Ready"
	I1002 22:19:22.187912 1469015 pod_ready.go:86] duration metric: took 176.308323ms for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.393864 1469015 pod_ready.go:83] waiting for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.787934 1469015 pod_ready.go:94] pod "kube-proxy-lzzt4" is "Ready"
	I1002 22:19:22.787962 1469015 pod_ready.go:86] duration metric: took 394.069322ms for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.988369 1469015 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387713 1469015 pod_ready.go:94] pod "kube-scheduler-no-preload-975002" is "Ready"
	I1002 22:19:23.387749 1469015 pod_ready.go:86] duration metric: took 399.351959ms for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387762 1469015 pod_ready.go:40] duration metric: took 2.410324146s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:23.436907 1469015 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:23.440526 1469015 out.go:179] * Done! kubectl is now configured to use "no-preload-975002" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.773875109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.781620167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.782809176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.799975422Z" level=info msg="Created container e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper" id=35f36a5a-1e61-4f17-a6f5-a6271aa4ffb5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.80236143Z" level=info msg="Starting container: e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86" id=f59b01e4-f225-4cc3-a0a4-37a02a5a3612 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.805298789Z" level=info msg="Started container" PID=1648 containerID=e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper id=f59b01e4-f225-4cc3-a0a4-37a02a5a3612 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00659fd51707924d8df8c83d07f15861e8e8004af7ab2b4fd4e38114edbc1397
	Oct 02 22:19:21 embed-certs-080134 conmon[1646]: conmon e2059f55d5f7c75a7bcb <ninfo>: container 1648 exited with status 1
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.246767807Z" level=info msg="Removing container: 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.257264529Z" level=info msg="Error loading conmon cgroup of container 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3: cgroup deleted" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.261829837Z" level=info msg="Removed container 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.162535083Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167439077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167473858Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167496339Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170680848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170714382Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170736781Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.174947167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.174983696Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.175006637Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.17828172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.178351659Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.1783768Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.181540673Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.181578941Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2059f55d5f7c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   00659fd517079       dashboard-metrics-scraper-6ffb444bf9-nw57v   kubernetes-dashboard
	2e99de6785ee2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago       Running             storage-provisioner         2                   ab6a56c3f8d0b       storage-provisioner                          kube-system
	e25231cdd4c4f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   363c7468abbed       kubernetes-dashboard-855c9754f9-9jzrx        kubernetes-dashboard
	8de2557fb7f6a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   ab6a56c3f8d0b       storage-provisioner                          kube-system
	0a48ed5b3fbf4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   6551171810c82       coredns-66bc5c9577-n47rb                     kube-system
	37a4e6d8f9f06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   2b38b14b39440       busybox                                      default
	b37b9a0b7fd29       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   bfa345ea2462a       kindnet-mv8z6                                kube-system
	f94e9eeb4727c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   985b29e1850bd       kube-proxy-7lq28                             kube-system
	31cbedff1fe1e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   54411b658b94f       etcd-embed-certs-080134                      kube-system
	7a11f9726f5e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   8641ea91c7eda       kube-scheduler-embed-certs-080134            kube-system
	a71c5b7dea391       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1837dd5a98103       kube-controller-manager-embed-certs-080134   kube-system
	abb55666df5bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3fa8218d8a2cb       kube-apiserver-embed-certs-080134            kube-system
	
	
	==> coredns [0a48ed5b3fbf47202add398ac63a0933859619e11b2d3a7a92ec0f84fd39b13d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53119 - 29270 "HINFO IN 7210804044794814178.1397832052451397412. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063801773s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-080134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-080134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=embed-certs-080134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:17:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-080134
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:19:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-080134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 1768e21c00254ff9a86ef445008105e3
	  System UUID:                de46e61e-5f26-496d-bb31-d89253767b5d
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-n47rb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-080134                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-mv8z6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-080134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-embed-certs-080134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-7lq28                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-080134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nw57v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9jzrx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-080134 event: Registered Node embed-certs-080134 in Controller
	  Normal   NodeReady                99s                    kubelet          Node embed-certs-080134 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node embed-certs-080134 event: Registered Node embed-certs-080134 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47] <==
	{"level":"warn","ts":"2025-10-02T22:18:41.787414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.826152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.885152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.946299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.990692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.042982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.095747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.134121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.153515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.189035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.210473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.234260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.278938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.312501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.334256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.382173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.409463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.440282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.481653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.511287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.544783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.600648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.625672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.680952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.741586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:19:37 up  7:01,  0 user,  load average: 5.62, 3.59, 2.57
	Linux embed-certs-080134 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b37b9a0b7fd291813508411cfc8272652f0c2752c32e03b610e896ac45ffcb46] <==
	I1002 22:18:45.915080       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:18:45.915299       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:18:45.915436       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:18:45.915448       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:18:45.915458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:18:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:18:46.160292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:18:46.160311       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:18:46.160319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:18:46.160587       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:19:16.160393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:19:16.160380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:19:16.160542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:19:16.161764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:19:17.761184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:19:17.761224       1 metrics.go:72] Registering metrics
	I1002 22:19:17.761301       1 controller.go:711] "Syncing nftables rules"
	I1002 22:19:26.162200       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:19:26.162245       1 main.go:301] handling current node
	I1002 22:19:36.163493       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:19:36.163542       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10] <==
	I1002 22:18:44.479708       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:18:44.479727       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:18:44.485445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 22:18:44.487859       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:18:44.488897       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:18:44.488995       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:18:44.489249       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:18:44.490052       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:18:44.490064       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:18:44.490660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:18:44.490696       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:18:44.526979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:18:44.528648       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1002 22:18:44.592824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:18:44.822464       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:18:45.107334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:18:46.889856       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:18:47.127598       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:18:47.233796       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:18:47.348366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:18:47.613265       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.23.136"}
	I1002 22:18:47.642338       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.204.160"}
	I1002 22:18:49.567400       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:18:49.606894       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:18:49.904978       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5] <==
	I1002 22:18:49.563651       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:18:49.566236       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:18:49.566464       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 22:18:49.568646       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:18:49.569001       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:18:49.570015       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:18:49.571347       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:18:49.572588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:18:49.579868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:18:49.596646       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:18:49.596838       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:18:49.596913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:18:49.597231       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:18:49.597347       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:18:49.598560       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:18:49.604133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 22:18:49.604316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 22:18:49.604431       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:18:49.604547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:18:49.609208       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:18:49.609310       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:18:49.611444       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:18:49.634391       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:18:49.634493       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:18:49.634526       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f94e9eeb4727cf987f9a8b1a30b32e826140c814cd7eaca8aa5c10744f968eaa] <==
	I1002 22:18:47.514294       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:18:47.687145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:18:47.787247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:18:47.807794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:18:47.807883       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:18:47.899961       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:18:47.900080       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:18:47.905691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:18:47.906158       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:18:47.906372       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:18:47.907600       1 config.go:200] "Starting service config controller"
	I1002 22:18:47.907655       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:18:47.907697       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:18:47.907739       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:18:47.907781       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:18:47.907812       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:18:47.908458       1 config.go:309] "Starting node config controller"
	I1002 22:18:47.910952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:18:47.911012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:18:48.008144       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:18:48.008244       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:18:48.008264       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d] <==
	I1002 22:18:41.442660       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:18:47.217765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:18:47.239456       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:18:47.277349       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:18:47.277389       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:18:47.277436       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:47.277444       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:47.277458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.277464       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.280569       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:18:47.280608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:18:47.390899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.391047       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:18:47.391226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131152     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0467aada-bdde-41bf-96cd-f172f8e2e4c3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nw57v\" (UID: \"0467aada-bdde-41bf-96cd-f172f8e2e4c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131209     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmg47\" (UniqueName: \"kubernetes.io/projected/0467aada-bdde-41bf-96cd-f172f8e2e4c3-kube-api-access-nmg47\") pod \"dashboard-metrics-scraper-6ffb444bf9-nw57v\" (UID: \"0467aada-bdde-41bf-96cd-f172f8e2e4c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131231     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aef97e4d-9f32-404b-9bac-6f18e92b149a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9jzrx\" (UID: \"aef97e4d-9f32-404b-9bac-6f18e92b149a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131250     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswfv\" (UniqueName: \"kubernetes.io/projected/aef97e4d-9f32-404b-9bac-6f18e92b149a-kube-api-access-sswfv\") pod \"kubernetes-dashboard-855c9754f9-9jzrx\" (UID: \"aef97e4d-9f32-404b-9bac-6f18e92b149a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.783908     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:18:51 embed-certs-080134 kubelet[780]: W1002 22:18:51.327562     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/crio-363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944 WatchSource:0}: Error finding container 363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944: Status 404 returned error can't find the container with id 363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944
	Oct 02 22:19:00 embed-certs-080134 kubelet[780]: I1002 22:19:00.254622     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx" podStartSLOduration=2.197927269 podStartE2EDuration="10.251241705s" podCreationTimestamp="2025-10-02 22:18:50 +0000 UTC" firstStartedPulling="2025-10-02 22:18:51.334272468 +0000 UTC m=+15.827193275" lastFinishedPulling="2025-10-02 22:18:59.387586905 +0000 UTC m=+23.880507711" observedRunningTime="2025-10-02 22:19:00.238726312 +0000 UTC m=+24.731647118" watchObservedRunningTime="2025-10-02 22:19:00.251241705 +0000 UTC m=+24.744162512"
	Oct 02 22:19:06 embed-certs-080134 kubelet[780]: I1002 22:19:06.198524     780 scope.go:117] "RemoveContainer" containerID="50dbec95cce22e8957360e45d8ae0da5601aac24db5524b604063a4262efa931"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: I1002 22:19:07.203260     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: E1002 22:19:07.203933     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: I1002 22:19:07.204068     780 scope.go:117] "RemoveContainer" containerID="50dbec95cce22e8957360e45d8ae0da5601aac24db5524b604063a4262efa931"
	Oct 02 22:19:08 embed-certs-080134 kubelet[780]: I1002 22:19:08.209117     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:08 embed-certs-080134 kubelet[780]: E1002 22:19:08.210531     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:11 embed-certs-080134 kubelet[780]: I1002 22:19:11.284143     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:11 embed-certs-080134 kubelet[780]: E1002 22:19:11.284332     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:18 embed-certs-080134 kubelet[780]: I1002 22:19:18.231649     780 scope.go:117] "RemoveContainer" containerID="8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156"
	Oct 02 22:19:21 embed-certs-080134 kubelet[780]: I1002 22:19:21.769636     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: I1002 22:19:22.244697     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: I1002 22:19:22.245033     780 scope.go:117] "RemoveContainer" containerID="e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: E1002 22:19:22.245222     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:31 embed-certs-080134 kubelet[780]: I1002 22:19:31.284323     780 scope.go:117] "RemoveContainer" containerID="e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	Oct 02 22:19:31 embed-certs-080134 kubelet[780]: E1002 22:19:31.284535     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e25231cdd4c4fd0dd69d5de90f20d75c497d5d57ba39068e59d1bd6e70ac3e8e] <==
	2025/10/02 22:18:59 Starting overwatch
	2025/10/02 22:18:59 Using namespace: kubernetes-dashboard
	2025/10/02 22:18:59 Using in-cluster config to connect to apiserver
	2025/10/02 22:18:59 Using secret token for csrf signing
	2025/10/02 22:18:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:18:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:18:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:18:59 Generating JWE encryption key
	2025/10/02 22:18:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:18:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:19:01 Initializing JWE encryption key from synchronized object
	2025/10/02 22:19:01 Creating in-cluster Sidecar client
	2025/10/02 22:19:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:19:01 Serving insecurely on HTTP port: 9090
	2025/10/02 22:19:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2e99de6785ee276082cf2ab23e0c125a5ecf20685dbddf201dd67fbad2b0bae0] <==
	I1002 22:19:18.289233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:19:18.302236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:19:18.302296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:19:18.305794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:21.762228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:26.022719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:29.621343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:32.675951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.698411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.705506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:35.705772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:19:35.706204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5c4053a-7381-4524-a450-046d4cf76d2d", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2 became leader
	I1002 22:19:35.706475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2!
	W1002 22:19:35.721717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.738394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:35.806959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2!
	W1002 22:19:37.742121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:37.747949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156] <==
	I1002 22:18:47.298992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:19:17.394260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
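The pod_ready.go lines near the top of the log above show minikube's label-based readiness wait: for each component label it polls "kube-system" pods until every matching pod reports the Ready condition, under an overall 4m0s cap. The following is a minimal client-go sketch of that polling pattern, an illustration only (not minikube's actual pod_ready.go); the selectors and timeout are taken from the log, everything else is assumed.

	// readiness_wait_sketch.go: poll kube-system pods by label until Ready.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Mirrors the "extra waiting up to 4m0s" line in the log above.
		deadline := time.Now().Add(4 * time.Minute)
		selectors := []string{"k8s-app=kube-dns", "component=etcd",
			"component=kube-apiserver", "component=kube-controller-manager",
			"k8s-app=kube-proxy", "component=kube-scheduler"}

		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
					metav1.ListOptions{LabelSelector: sel})
				allReady := err == nil && len(pods.Items) > 0
				for i := range pods.Items {
					if !isReady(&pods.Items[i]) {
						allReady = false
					}
				}
				if allReady {
					fmt.Printf("pods for %q are Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					panic(fmt.Sprintf("timed out waiting for %q", sel))
				}
				time.Sleep(500 * time.Millisecond)
			}
		}
	}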
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-080134 -n embed-certs-080134: exit status 2 (359.58038ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
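The --format={{.APIServer}} flag in the status command above is a Go text/template applied to minikube's status struct, which is why the -- stdout -- block holds just the single field value "Running". A small sketch of that mechanism follows; the Status struct here is invented for illustration (minikube's real type has more fields).

	// status_template_sketch.go: render one field of a struct via text/template.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the harness's status struct (assumed shape).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// Prints "Running", matching the -- stdout -- block above.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}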
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-080134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-080134
helpers_test.go:243: (dbg) docker inspect embed-certs-080134:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	        "Created": "2025-10-02T22:16:37.741033428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1471538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:18:27.611831299Z",
	            "FinishedAt": "2025-10-02T22:18:26.035152734Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/hosts",
	        "LogPath": "/var/lib/docker/containers/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e-json.log",
	        "Name": "/embed-certs-080134",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-080134:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-080134",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e",
	                "LowerDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51acf070cd9c7f7600fe5195522ba38ddeff0184256eb4e7d22db01db02a4860/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-080134",
	                "Source": "/var/lib/docker/volumes/embed-certs-080134/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-080134",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-080134",
	                "name.minikube.sigs.k8s.io": "embed-certs-080134",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e536d900c8d925da20d8c30ee4bd80b79cf90c7ffa0f4b18df861553e8c7dc8a",
	            "SandboxKey": "/var/run/docker/netns/e536d900c8d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34581"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34582"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34583"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34584"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-080134": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:43:f9:48:f5:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8a64f7585ec4aa24b8094a59cd780b3d89a1239c63c189f2097d1ca2a382a6ac",
	                    "EndpointID": "8bdd9c0e919f025ea03f667ba12d2b1f561b0193670a6dd17ff34c73118556d8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-080134",
	                        "d75a770c7fe5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
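The Ports map under NetworkSettings above is what minikube queries to recover the host port mapped to the guest's SSH port; the cli_runner lines later in this log show the exact template it issues. A minimal sketch, assuming a docker CLI on PATH and the container name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template minikube's cli_runner runs (see the log lines further down):
	// extracts the first host binding for the container's 22/tcp port.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"embed-certs-080134").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 34581 per the inspect output above
}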
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134: exit status 2 (329.347727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-080134 logs -n 25: (1.30008031s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:15 UTC │
	│ start   │ -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:15 UTC │ 02 Oct 25 22:16 UTC │
	│ image   │ old-k8s-version-173127 image list --format=json                                                                                                                                                                                               │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ pause   │ -p old-k8s-version-173127 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ delete  │ -p old-k8s-version-173127                                                                                                                                                                                                                     │ old-k8s-version-173127       │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:18:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:18:27.241377 1471394 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:18:27.241590 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241616 1471394 out.go:374] Setting ErrFile to fd 2...
	I1002 22:18:27.241635 1471394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:18:27.241916 1471394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:18:27.242344 1471394 out.go:368] Setting JSON to false
	I1002 22:18:27.243293 1471394 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25233,"bootTime":1759418275,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:18:27.243384 1471394 start.go:140] virtualization:  
	I1002 22:18:27.248909 1471394 out.go:179] * [embed-certs-080134] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:18:27.252367 1471394 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:18:27.252410 1471394 notify.go:220] Checking for updates...
	I1002 22:18:27.259480 1471394 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:18:27.262681 1471394 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:27.265720 1471394 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:18:27.268662 1471394 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:18:27.271638 1471394 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:18:27.275118 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:27.275745 1471394 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:18:27.313492 1471394 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:18:27.313615 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.410660 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.397108084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.410768 1471394 docker.go:318] overlay module found
	I1002 22:18:27.413943 1471394 out.go:179] * Using the docker driver based on existing profile
	I1002 22:18:27.416868 1471394 start.go:304] selected driver: docker
	I1002 22:18:27.416888 1471394 start.go:924] validating driver "docker" against &{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.416986 1471394 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:18:27.417693 1471394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:18:27.511305 1471394 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-02 22:18:27.50234397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:18:27.511649 1471394 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:27.511690 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:27.511762 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:27.511806 1471394 start.go:348] cluster config:
	{Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:27.516554 1471394 out.go:179] * Starting "embed-certs-080134" primary control-plane node in "embed-certs-080134" cluster
	I1002 22:18:27.519781 1471394 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:18:27.522245 1471394 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:18:27.525376 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:27.525450 1471394 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:18:27.525463 1471394 cache.go:58] Caching tarball of preloaded images
	I1002 22:18:27.525477 1471394 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:18:27.525612 1471394 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:18:27.525622 1471394 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:18:27.525733 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.550567 1471394 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:18:27.550596 1471394 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:18:27.550618 1471394 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:18:27.550642 1471394 start.go:360] acquireMachinesLock for embed-certs-080134: {Name:mkb3c88b79da323c6aaa02ac6130cdaf0d74178c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:18:27.550700 1471394 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "embed-certs-080134"
	I1002 22:18:27.550727 1471394 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:18:27.550742 1471394 fix.go:54] fixHost starting: 
	I1002 22:18:27.551007 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.571312 1471394 fix.go:112] recreateIfNeeded on embed-certs-080134: state=Stopped err=<nil>
	W1002 22:18:27.571343 1471394 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:18:23.619484 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1002 22:18:23.646585 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1002 22:18:23.646707 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1002 22:18:23.698774 1469015 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 22:18:23.699086 1469015 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389499 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.74273701s)
	I1002 22:18:25.389529 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1002 22:18:25.389567 1469015 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389646 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1002 22:18:25.389734 1469015 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.690609487s)
	I1002 22:18:25.389787 1469015 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 22:18:25.389818 1469015 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:25.389864 1469015 ssh_runner.go:195] Run: which crictl
	I1002 22:18:27.620887 1469015 ssh_runner.go:235] Completed: which crictl: (2.230998103s)
	I1002 22:18:27.620969 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:27.621111 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.231448285s)
	I1002 22:18:27.621127 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1002 22:18:27.621143 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.621168 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1002 22:18:27.575412 1471394 out.go:252] * Restarting existing docker container for "embed-certs-080134" ...
	I1002 22:18:27.575540 1471394 cli_runner.go:164] Run: docker start embed-certs-080134
	I1002 22:18:27.864000 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:27.887726 1471394 kic.go:430] container "embed-certs-080134" state is running.
	I1002 22:18:27.888104 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:27.917730 1471394 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/config.json ...
	I1002 22:18:27.917985 1471394 machine.go:93] provisionDockerMachine start ...
	I1002 22:18:27.918062 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:27.968040 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:27.968363 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:27.968372 1471394 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:18:27.971583 1471394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:18:31.121910 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.121934 1471394 ubuntu.go:182] provisioning hostname "embed-certs-080134"
	I1002 22:18:31.121996 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.144299 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.144606 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.144618 1471394 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-080134 && echo "embed-certs-080134" | sudo tee /etc/hostname
	I1002 22:18:31.312251 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-080134
	
	I1002 22:18:31.312326 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.335751 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:31.336056 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:31.336080 1471394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-080134' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-080134/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-080134' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:18:31.486452 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:18:31.486527 1471394 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:18:31.486563 1471394 ubuntu.go:190] setting up certificates
	I1002 22:18:31.486604 1471394 provision.go:84] configureAuth start
	I1002 22:18:31.486710 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:31.511138 1471394 provision.go:143] copyHostCerts
	I1002 22:18:31.511211 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:18:31.511228 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:18:31.511300 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:18:31.511399 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:18:31.511404 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:18:31.511430 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:18:31.511493 1471394 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:18:31.511498 1471394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:18:31.511522 1471394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:18:31.511575 1471394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.embed-certs-080134 san=[127.0.0.1 192.168.85.2 embed-certs-080134 localhost minikube]
	I1002 22:18:31.893293 1471394 provision.go:177] copyRemoteCerts
	I1002 22:18:31.893359 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:18:31.893409 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:31.911758 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.014147 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:18:32.050758 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:18:32.080185 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:18:32.105189 1471394 provision.go:87] duration metric: took 618.544299ms to configureAuth
	I1002 22:18:32.105275 1471394 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:18:32.105519 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:32.105705 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.124928 1471394 main.go:141] libmachine: Using SSH client type: native
	I1002 22:18:32.125251 1471394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34581 <nil> <nil>}
	I1002 22:18:32.125267 1471394 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:18:29.233061 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.611867781s)
	I1002 22:18:29.233089 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1002 22:18:29.233107 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233152 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1002 22:18:29.233217 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.612233888s)
	I1002 22:18:29.233255 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.533897 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.300616469s)
	I1002 22:18:30.533970 1469015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:30.534151 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.300983641s)
	I1002 22:18:30.534167 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1002 22:18:30.534184 1469015 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:30.534215 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1002 22:18:32.211052 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.676812997s)
	I1002 22:18:32.211081 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1002 22:18:32.211098 1469015 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211148 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1002 22:18:32.211195 1469015 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.677204678s)
	I1002 22:18:32.211239 1469015 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 22:18:32.211326 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:32.472964 1471394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:18:32.472994 1471394 machine.go:96] duration metric: took 4.5549988s to provisionDockerMachine
	I1002 22:18:32.473005 1471394 start.go:293] postStartSetup for "embed-certs-080134" (driver="docker")
	I1002 22:18:32.473016 1471394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:18:32.473075 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:18:32.473112 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.504127 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.607008 1471394 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:18:32.610932 1471394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:18:32.610957 1471394 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:18:32.610967 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:18:32.611017 1471394 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:18:32.611092 1471394 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:18:32.611198 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:18:32.619379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:32.638857 1471394 start.go:296] duration metric: took 165.837176ms for postStartSetup
	I1002 22:18:32.639023 1471394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:18:32.639097 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.661986 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.756088 1471394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:18:32.763175 1471394 fix.go:56] duration metric: took 5.212432494s for fixHost
	I1002 22:18:32.763197 1471394 start.go:83] releasing machines lock for "embed-certs-080134", held for 5.212483004s
	I1002 22:18:32.763275 1471394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-080134
	I1002 22:18:32.789872 1471394 ssh_runner.go:195] Run: cat /version.json
	I1002 22:18:32.789929 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.790254 1471394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:18:32.790302 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:32.828586 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:32.836099 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:33.043006 1471394 ssh_runner.go:195] Run: systemctl --version
	I1002 22:18:33.050763 1471394 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:18:33.108072 1471394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:18:33.113998 1471394 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:18:33.114128 1471394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:18:33.124321 1471394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:18:33.124388 1471394 start.go:495] detecting cgroup driver to use...
	I1002 22:18:33.124436 1471394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:18:33.124526 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:18:33.142258 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:18:33.157706 1471394 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:18:33.157778 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:18:33.175228 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:18:33.190313 1471394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:18:33.346607 1471394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:18:33.532428 1471394 docker.go:234] disabling docker service ...
	I1002 22:18:33.532502 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:18:33.563245 1471394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:18:33.583139 1471394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:18:33.750538 1471394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:18:33.909095 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:18:33.926180 1471394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:18:33.956759 1471394 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:18:33.956833 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.974951 1471394 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:18:33.975030 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.988735 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:33.998132 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.011278 1471394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:18:34.023270 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.038545 1471394 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.056908 1471394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:18:34.071629 1471394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:18:34.080896 1471394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
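Kubernetes pod networking needs the node to forward packets between interfaces, which is why the run flips /proc/sys/net/ipv4/ip_forward above. The write is the ephemeral equivalent of the following (a sketch; a /etc/sysctl.d drop-in would be the persistent form):

	sudo sysctl -w net.ipv4.ip_forward=1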
	I1002 22:18:34.089541 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:34.286970 1471394 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:18:34.854866 1471394 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:18:34.854996 1471394 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:18:34.860113 1471394 start.go:563] Will wait 60s for crictl version
	I1002 22:18:34.860263 1471394 ssh_runner.go:195] Run: which crictl
	I1002 22:18:34.865256 1471394 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:18:34.903231 1471394 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
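For reference, the runtime probe above can be reproduced by hand on the node; a minimal sketch, assuming the crictl.yaml endpoint written earlier is in place (which makes the explicit flag optional):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# prints Version, RuntimeName (cri-o), RuntimeVersion, RuntimeApiVersion as logged above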
	I1002 22:18:34.903362 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:34.963890 1471394 ssh_runner.go:195] Run: crio --version
	I1002 22:18:35.003885 1471394 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:18:35.007109 1471394 cli_runner.go:164] Run: docker network inspect embed-certs-080134 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:18:35.031834 1471394 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:18:35.036431 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
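The hosts-file edit above deliberately copies a temp file back instead of renaming it into place: /etc/hosts inside the node container is bind-mounted by the container runtime, so rename-based tools (sed -i, mv) cannot replace it, while cp rewrites the existing inode. The general pattern, as used in the logged command:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp, not mv: the target is a bind mount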
	I1002 22:18:35.049506 1471394 kubeadm.go:883] updating cluster {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:18:35.049618 1471394 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:18:35.049727 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.097850 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.097880 1471394 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:18:35.097944 1471394 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:18:35.133423 1471394 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:18:35.133449 1471394 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:18:35.133457 1471394 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:35.133555 1471394 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-080134 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
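The kubelet unit text above is installed as a systemd drop-in (the 368-byte write to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this run). The bare ExecStart= line is deliberate: non-oneshot services may declare only one ExecStart, so an override must first assign it empty to clear the value inherited from kubelet.service. A minimal sketch of the pattern, flags abbreviated:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf

	sudo systemctl daemon-reload && sudo systemctl restart kubelet   # pick up the drop-in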
	I1002 22:18:35.133645 1471394 ssh_runner.go:195] Run: crio config
	I1002 22:18:35.204040 1471394 cni.go:84] Creating CNI manager for ""
	I1002 22:18:35.204063 1471394 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:35.204081 1471394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:35.204128 1471394 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-080134 NodeName:embed-certs-080134 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:35.204371 1471394 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-080134"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
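The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check such a config without touching node state, a sketch assuming the staged path and the bundled kubeadm binary:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run

kubeadm validates the config and prints the manifests it would write, leaving the real /etc/kubernetes untouched.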
	I1002 22:18:35.204464 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:35.213713 1471394 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:18:35.213818 1471394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:35.222933 1471394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1002 22:18:35.247563 1471394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:35.270076 1471394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1002 22:18:35.285413 1471394 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:35.289522 1471394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:35.312724 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:35.467847 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:35.484391 1471394 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134 for IP: 192.168.85.2
	I1002 22:18:35.484464 1471394 certs.go:195] generating shared ca certs ...
	I1002 22:18:35.484494 1471394 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:35.484661 1471394 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:35.484747 1471394 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:35.484772 1471394 certs.go:257] generating profile certs ...
	I1002 22:18:35.484898 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/client.key
	I1002 22:18:35.485001 1471394 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key.248cab64
	I1002 22:18:35.485075 1471394 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key
	I1002 22:18:35.485215 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:35.485273 1471394 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:35.485298 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:35.485348 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:35.485397 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:35.485447 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:35.485514 1471394 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:35.486237 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:35.506450 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:35.567521 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:35.627879 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:35.730475 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1002 22:18:35.779333 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:18:35.819379 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:35.844088 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/embed-certs-080134/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 22:18:35.873678 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:35.894361 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:35.912786 1471394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:35.930978 1471394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:35.944370 1471394 ssh_runner.go:195] Run: openssl version
	I1002 22:18:35.951356 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:35.959750 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964120 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:35.964262 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:36.007111 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:36.016761 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:36.026506 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031576 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.031699 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:36.075194 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:18:36.086066 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:36.095588 1471394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100529 1471394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.100692 1471394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:36.144821 1471394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
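The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: verification scans /etc/ssl/certs for <subject-hash>.0, <subject-hash>.1, and so on, where the hash is exactly what the openssl x509 -hash -noout calls in this run print. Deriving a link name by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"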
	I1002 22:18:36.153776 1471394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:36.158258 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:18:36.245549 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:18:36.330215 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:18:36.420875 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:18:36.568928 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:18:36.711006 1471394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 22:18:36.796194 1471394 kubeadm.go:400] StartCluster: {Name:embed-certs-080134 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-080134 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:36.796345 1471394 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:36.796474 1471394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:36.892149 1471394 cri.go:89] found id: "31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47"
	I1002 22:18:36.892223 1471394 cri.go:89] found id: "7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d"
	I1002 22:18:36.892241 1471394 cri.go:89] found id: "a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5"
	I1002 22:18:36.892267 1471394 cri.go:89] found id: "abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10"
	I1002 22:18:36.892300 1471394 cri.go:89] found id: ""
	I1002 22:18:36.892388 1471394 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:18:36.915069 1471394 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:18:36Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:18:36.915204 1471394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:36.932678 1471394 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:18:36.932756 1471394 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:18:36.932847 1471394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:18:36.947880 1471394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:18:36.948438 1471394 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-080134" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.948600 1471394 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-080134" cluster setting kubeconfig missing "embed-certs-080134" context setting]
	I1002 22:18:36.948954 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.950584 1471394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:18:36.961307 1471394 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:18:36.961392 1471394 kubeadm.go:601] duration metric: took 28.610218ms to restartPrimaryControlPlane
	I1002 22:18:36.961415 1471394 kubeadm.go:402] duration metric: took 165.231557ms to StartCluster
	I1002 22:18:36.961458 1471394 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.961553 1471394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:18:36.962655 1471394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:36.962911 1471394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:18:36.963445 1471394 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:18:36.963528 1471394 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-080134"
	I1002 22:18:36.963542 1471394 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-080134"
	W1002 22:18:36.963547 1471394 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:18:36.963571 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964072 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.964329 1471394 config.go:182] Loaded profile config "embed-certs-080134": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:18:36.964414 1471394 addons.go:69] Setting dashboard=true in profile "embed-certs-080134"
	I1002 22:18:36.964442 1471394 addons.go:238] Setting addon dashboard=true in "embed-certs-080134"
	W1002 22:18:36.964463 1471394 addons.go:247] addon dashboard should already be in state true
	I1002 22:18:36.964512 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:36.964975 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.965467 1471394 addons.go:69] Setting default-storageclass=true in profile "embed-certs-080134"
	I1002 22:18:36.965492 1471394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-080134"
	I1002 22:18:36.965772 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:36.969471 1471394 out.go:179] * Verifying Kubernetes components...
	I1002 22:18:36.974201 1471394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:37.015067 1471394 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:18:37.021148 1471394 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.021174 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:18:37.021254 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.024388 1471394 addons.go:238] Setting addon default-storageclass=true in "embed-certs-080134"
	W1002 22:18:37.024435 1471394 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:18:37.024487 1471394 host.go:66] Checking if "embed-certs-080134" exists ...
	I1002 22:18:37.025002 1471394 cli_runner.go:164] Run: docker container inspect embed-certs-080134 --format={{.State.Status}}
	I1002 22:18:37.054125 1471394 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:18:37.059462 1471394 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:18:37.067041 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:18:37.067076 1471394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:18:37.067158 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.075265 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.098924 1471394 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.098943 1471394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:18:37.099010 1471394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-080134
	I1002 22:18:37.117464 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.143671 1471394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34581 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/embed-certs-080134/id_rsa Username:docker}
	I1002 22:18:37.089260 1469015 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (4.877913042s)
	I1002 22:18:37.089296 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 22:18:37.089321 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1002 22:18:37.089439 1469015 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.878271526s)
	I1002 22:18:37.089449 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1002 22:18:37.236522 1469015 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:37.236596 1469015 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1002 22:18:38.212691 1469015 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 22:18:38.212727 1469015 cache_images.go:124] Successfully loaded all cached images
	I1002 22:18:38.212734 1469015 cache_images.go:93] duration metric: took 15.924038027s to LoadCachedImages
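The cached images are loaded with podman yet are immediately visible to the sudo crictl images checks elsewhere in this run, because CRI-O and podman on the node share the same containers/storage image store; no separate import into the CRI runtime is needed. The round-trip in isolation:

	sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	sudo crictl images | grep storage-provisioner   # served from the same store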
	I1002 22:18:38.212745 1469015 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:18:38.212836 1469015 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-975002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:18:38.212921 1469015 ssh_runner.go:195] Run: crio config
	I1002 22:18:38.318444 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:18:38.318513 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:18:38.318545 1469015 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:18:38.318596 1469015 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-975002 NodeName:no-preload-975002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:18:38.318772 1469015 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-975002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:18:38.318873 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.331027 1469015 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1002 22:18:38.331144 1469015 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1002 22:18:38.348565 1469015 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1002 22:18:38.348774 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1002 22:18:38.349315 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1002 22:18:38.349739 1469015 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1002 22:18:38.355088 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1002 22:18:38.355120 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
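The ?checksum=file:...sha256 fragments on the download URLs above tell the downloader to fetch the published digest and verify the binary against it before use. The equivalent manual check (a sketch, not minikube's download.go):

	curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
	curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check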
	I1002 22:18:37.478882 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:18:37.511961 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:18:37.569298 1471394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:37.600558 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:18:37.600586 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:18:37.746513 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:18:37.746540 1471394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:18:37.856796 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:18:37.856820 1471394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:18:37.949027 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:18:37.949050 1471394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:18:38.030540 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:18:38.030566 1471394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:18:38.065948 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:18:38.065977 1471394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:18:38.117556 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:18:38.117636 1471394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:18:38.153468 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:18:38.153488 1471394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:18:38.174710 1471394 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:18:38.174731 1471394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:18:38.194835 1471394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
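Addon manifests are applied with the cluster's own kubectl against the node-local kubeconfig; the same invocation pattern works for follow-up checks from inside the node. A sketch, assuming dashboard-ns.yaml creates the usual kubernetes-dashboard namespace:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc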
	I1002 22:18:39.419779 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:39.450053 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1002 22:18:39.460331 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1002 22:18:39.460370 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1002 22:18:39.645411 1469015 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1002 22:18:39.675855 1469015 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1002 22:18:39.675899 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1002 22:18:40.325202 1469015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:18:40.335086 1469015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:18:40.357734 1469015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:18:40.376320 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 22:18:40.402686 1469015 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:18:40.407035 1469015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:18:40.426631 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:18:40.633489 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:18:40.673001 1469015 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002 for IP: 192.168.76.2
	I1002 22:18:40.673026 1469015 certs.go:195] generating shared ca certs ...
	I1002 22:18:40.673042 1469015 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:40.673183 1469015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:18:40.673227 1469015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:18:40.673237 1469015 certs.go:257] generating profile certs ...
	I1002 22:18:40.673295 1469015 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key
	I1002 22:18:40.673312 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt with IP's: []
	I1002 22:18:41.128375 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt ...
	I1002 22:18:41.128406 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: {Name:mkfb502c73b4ad79c2095821374cc38c54249654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128600 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key ...
	I1002 22:18:41.128616 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key: {Name:mkd38cc7fd83e5057b4c9d7fd2e30313c24ba9cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.128721 1469015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57
	I1002 22:18:41.128741 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1002 22:18:41.508807 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 ...
	I1002 22:18:41.508837 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57: {Name:mkdebba222698e9ad33dbf8d5a6cf31ef95e43dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509040 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 ...
	I1002 22:18:41.509056 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57: {Name:mk918c9e23e54fec10949ffff53c7a04638071be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.509152 1469015 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt
	I1002 22:18:41.509231 1469015 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key
	I1002 22:18:41.509292 1469015 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key
	I1002 22:18:41.509314 1469015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt with IP's: []
	I1002 22:18:41.602000 1469015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt ...
	I1002 22:18:41.602055 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt: {Name:mk609465dd816034a4031d70c3a4ad97b9295f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:18:41.602227 1469015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key ...
	I1002 22:18:41.602243 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key: {Name:mk90272abbfdd0f5e7ed179e6e268a568e1c3a2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
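crypto.go above signs each profile cert against the shared minikube CA with an explicit IP SAN list ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] for the apiserver cert). The same shape expressed with plain openssl, a bash sketch with hypothetical file names:

	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')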
	I1002 22:18:41.602425 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:18:41.602469 1469015 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:18:41.602480 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:18:41.602507 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:18:41.602530 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:18:41.602553 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:18:41.602595 1469015 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:18:41.603153 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:18:41.622994 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:18:41.668729 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:18:41.733335 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:18:41.770774 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:18:41.811741 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:18:41.833957 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:18:41.866381 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:18:41.897084 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:18:41.933013 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:18:41.975831 1469015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:18:42.007096 1469015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:18:42.040238 1469015 ssh_runner.go:195] Run: openssl version
	I1002 22:18:42.047701 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:18:42.058795 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.063942 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.064034 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:18:42.124036 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:18:42.136576 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:18:42.154191 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160428 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.160611 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:18:42.210341 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:18:42.229193 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:18:42.246105 1469015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252813 1469015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.252965 1469015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:18:42.302268 1469015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
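	
	The three test/hash/link sequences above follow the OpenSSL trust-store convention: every PEM placed under /usr/share/ca-certificates must also be reachable through a symlink in /etc/ssl/certs named after the certificate's subject hash plus a ".0" suffix, which is what "openssl x509 -hash -noout" computes. A minimal Go sketch of the same pattern, assuming openssl is on PATH (the paths in main are illustrative, not taken from the harness):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash reproduces the pattern in the log:
	// /etc/ssl/certs/<subject-hash>.0 -> <pem>
	func linkBySubjectHash(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, as "ln -fs" does
		return os.Symlink(pem, link)
	}
	
	func main() {
		// hypothetical invocation for illustration
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	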
	I1002 22:18:42.317825 1469015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:18:42.325788 1469015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:18:42.325847 1469015 kubeadm.go:400] StartCluster: {Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:18:42.325922 1469015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:18:42.325984 1469015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:18:42.371270 1469015 cri.go:89] found id: ""
	I1002 22:18:42.371352 1469015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:18:42.382662 1469015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:18:42.392183 1469015 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:18:42.392247 1469015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:18:42.411308 1469015 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:18:42.411327 1469015 kubeadm.go:157] found existing configuration files:
	
	I1002 22:18:42.411379 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:18:42.423455 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:18:42.423521 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:18:42.447291 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:18:42.462645 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:18:42.462764 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:18:42.482289 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.502444 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:18:42.502557 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:18:42.520099 1469015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:18:42.532069 1469015 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:18:42.532138 1469015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
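	
	The four grep-then-rm pairs above are one stale-config check repeated per file: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint, it is removed so kubeadm can regenerate it cleanly. A rough Go equivalent, with the endpoint and file list taken from the log (a missing file is treated the same as a stale one, matching the "will remove" branch):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	const endpoint = "https://control-plane.minikube.internal:8443"
	
	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// missing file or wrong endpoint: remove so kubeadm rewrites it
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Fprintln(os.Stderr, rmErr)
				}
			}
		}
	}
	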
	I1002 22:18:42.541152 1469015 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:18:42.612796 1469015 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:18:42.613219 1469015 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:18:42.647545 1469015 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:18:42.647629 1469015 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:18:42.647681 1469015 kubeadm.go:318] OS: Linux
	I1002 22:18:42.647741 1469015 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:18:42.647801 1469015 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:18:42.647855 1469015 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:18:42.647910 1469015 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:18:42.647967 1469015 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:18:42.648022 1469015 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:18:42.648073 1469015 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:18:42.648128 1469015 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:18:42.648181 1469015 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:18:42.742836 1469015 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:18:42.742956 1469015 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:18:42.743057 1469015 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:18:42.766454 1469015 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:18:42.772327 1469015 out.go:252]   - Generating certificates and keys ...
	I1002 22:18:42.772429 1469015 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:18:42.772510 1469015 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:18:43.010659 1469015 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:18:47.782533 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.303615936s)
	I1002 22:18:47.782597 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.27061241s)
	I1002 22:18:47.782928 1471394 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.21360386s)
	I1002 22:18:47.782957 1471394 node_ready.go:35] waiting up to 6m0s for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.783211 1471394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.588348325s)
	I1002 22:18:47.786416 1471394 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-080134 addons enable metrics-server
	
	I1002 22:18:47.810901 1471394 node_ready.go:49] node "embed-certs-080134" is "Ready"
	I1002 22:18:47.810980 1471394 node_ready.go:38] duration metric: took 28.010163ms for node "embed-certs-080134" to be "Ready" ...
	I1002 22:18:47.811008 1471394 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:18:47.811097 1471394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:18:47.829126 1471394 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
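	
	The node_ready wait above amounts to polling the node object until its Ready condition reports True. A sketch of that check with client-go, assuming a reachable kubeconfig (the path and node name below are illustrative):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path illustrative
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := nodeReady(cs, "embed-certs-080134")
		fmt.Println(ready, err)
	}
	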
	I1002 22:18:44.741285 1469015 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:18:45.042415 1469015 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:18:45.580611 1469015 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:18:45.990281 1469015 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:18:45.991552 1469015 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.331914 1469015 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:18:46.334626 1469015 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-975002] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1002 22:18:46.859365 1469015 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:18:46.950416 1469015 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:18:47.286760 1469015 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:18:47.287351 1469015 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:18:47.922882 1469015 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:18:48.297144 1469015 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:18:48.621705 1469015 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:18:48.663782 1469015 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:18:48.920879 1469015 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:18:48.921552 1469015 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:18:48.924242 1469015 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:18:47.830679 1471394 api_server.go:72] duration metric: took 10.867713333s to wait for apiserver process to appear ...
	I1002 22:18:47.830745 1471394 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:18:47.830784 1471394 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:18:47.832587 1471394 addons.go:514] duration metric: took 10.869132458s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1002 22:18:47.840925 1471394 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:18:47.842205 1471394 api_server.go:141] control plane version: v1.34.1
	I1002 22:18:47.842226 1471394 api_server.go:131] duration metric: took 11.462399ms to wait for apiserver health ...
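	
	The healthz wait just above is a plain HTTPS poll of /healthz until the server answers 200 with the body "ok". A minimal sketch of that loop; for brevity it skips TLS verification, whereas the real client should verify against the cluster CA:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// NOTE: InsecureSkipVerify only for this sketch; verify against the cluster CA in real code.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
	}
	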
	I1002 22:18:47.842235 1471394 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:18:47.846620 1471394 system_pods.go:59] 8 kube-system pods found
	I1002 22:18:47.846652 1471394 system_pods.go:61] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.846662 1471394 system_pods.go:61] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.846668 1471394 system_pods.go:61] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.846676 1471394 system_pods.go:61] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.846683 1471394 system_pods.go:61] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.846687 1471394 system_pods.go:61] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.846694 1471394 system_pods.go:61] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.846698 1471394 system_pods.go:61] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.846703 1471394 system_pods.go:74] duration metric: took 4.462467ms to wait for pod list to return data ...
	I1002 22:18:47.846711 1471394 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:18:47.853249 1471394 default_sa.go:45] found service account: "default"
	I1002 22:18:47.853325 1471394 default_sa.go:55] duration metric: took 6.606684ms for default service account to be created ...
	I1002 22:18:47.853348 1471394 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:18:47.863524 1471394 system_pods.go:86] 8 kube-system pods found
	I1002 22:18:47.863562 1471394 system_pods.go:89] "coredns-66bc5c9577-n47rb" [1a9aa9ca-1be0-44d0-b70d-54990bd49fa9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:18:47.863572 1471394 system_pods.go:89] "etcd-embed-certs-080134" [66caeb25-2ac1-4390-a1e0-bb17cd7571d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:18:47.863578 1471394 system_pods.go:89] "kindnet-mv8z6" [11af225f-d46c-4749-ae66-b539ef3deafc] Running
	I1002 22:18:47.863586 1471394 system_pods.go:89] "kube-apiserver-embed-certs-080134" [574f234a-c18f-44c3-b4d5-3457f69be5b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:18:47.863592 1471394 system_pods.go:89] "kube-controller-manager-embed-certs-080134" [9ddbf8f0-9d33-4247-b377-bae8cf9b1b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:18:47.863600 1471394 system_pods.go:89] "kube-proxy-7lq28" [773ab73d-9ba2-45c8-8731-99bf5e77a39c] Running
	I1002 22:18:47.863607 1471394 system_pods.go:89] "kube-scheduler-embed-certs-080134" [f47cf601-b0c8-4659-b2a8-56d5605e5ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:18:47.863611 1471394 system_pods.go:89] "storage-provisioner" [3bfae264-fe18-4caf-a609-570bc75daf7d] Running
	I1002 22:18:47.863621 1471394 system_pods.go:126] duration metric: took 10.255127ms to wait for k8s-apps to be running ...
	I1002 22:18:47.863635 1471394 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:18:47.863689 1471394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:18:47.880954 1471394 system_svc.go:56] duration metric: took 17.310589ms WaitForService to wait for kubelet
	I1002 22:18:47.880983 1471394 kubeadm.go:586] duration metric: took 10.918021245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:18:47.881002 1471394 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:18:47.884495 1471394 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:18:47.884528 1471394 node_conditions.go:123] node cpu capacity is 2
	I1002 22:18:47.884541 1471394 node_conditions.go:105] duration metric: took 3.532088ms to run NodePressure ...
	I1002 22:18:47.884553 1471394 start.go:241] waiting for startup goroutines ...
	I1002 22:18:47.884565 1471394 start.go:246] waiting for cluster config update ...
	I1002 22:18:47.884583 1471394 start.go:255] writing updated cluster config ...
	I1002 22:18:47.884874 1471394 ssh_runner.go:195] Run: rm -f paused
	I1002 22:18:47.888950 1471394 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:18:47.892876 1471394 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:18:49.899735 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:48.927837 1469015 out.go:252]   - Booting up control plane ...
	I1002 22:18:48.927941 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:18:48.928022 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:18:48.928092 1469015 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:18:48.965877 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:18:48.965994 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:18:48.977967 1469015 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:18:48.980386 1469015 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:18:48.980738 1469015 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:18:49.132675 1469015 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:18:49.132802 1469015 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:18:50.138969 1469015 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001293211s
	I1002 22:18:50.139082 1469015 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:18:50.139168 1469015 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1002 22:18:50.139262 1469015 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:18:50.139344 1469015 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1002 22:18:52.398467 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:54.403622 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:18:56.898368 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:18:57.339828 1469015 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.202235828s
	I1002 22:18:57.929769 1469015 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.792573735s
	I1002 22:18:59.139750 1469015 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002386073s
	I1002 22:18:59.165026 1469015 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:18:59.183011 1469015 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:18:59.201173 1469015 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:18:59.201382 1469015 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-975002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:18:59.219148 1469015 kubeadm.go:318] [bootstrap-token] Using token: hf2oiw.qzyeh524x9w4di8u
	I1002 22:18:59.222617 1469015 out.go:252]   - Configuring RBAC rules ...
	I1002 22:18:59.222757 1469015 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:18:59.229488 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:18:59.241677 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:18:59.248084 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:18:59.253811 1469015 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:18:59.259740 1469015 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:18:59.550870 1469015 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:19:00.226768 1469015 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:19:00.552936 1469015 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:19:00.558581 1469015 kubeadm.go:318] 
	I1002 22:19:00.558664 1469015 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:19:00.558671 1469015 kubeadm.go:318] 
	I1002 22:19:00.558753 1469015 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:19:00.558758 1469015 kubeadm.go:318] 
	I1002 22:19:00.558785 1469015 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:19:00.558847 1469015 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:19:00.558901 1469015 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:19:00.558905 1469015 kubeadm.go:318] 
	I1002 22:19:00.558962 1469015 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:19:00.558967 1469015 kubeadm.go:318] 
	I1002 22:19:00.559017 1469015 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:19:00.559023 1469015 kubeadm.go:318] 
	I1002 22:19:00.559078 1469015 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:19:00.559178 1469015 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:19:00.559252 1469015 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:19:00.559257 1469015 kubeadm.go:318] 
	I1002 22:19:00.559349 1469015 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:19:00.559430 1469015 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:19:00.559435 1469015 kubeadm.go:318] 
	I1002 22:19:00.559524 1469015 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559635 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:19:00.559656 1469015 kubeadm.go:318] 	--control-plane 
	I1002 22:19:00.559661 1469015 kubeadm.go:318] 
	I1002 22:19:00.559761 1469015 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:19:00.559766 1469015 kubeadm.go:318] 
	I1002 22:19:00.559852 1469015 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token hf2oiw.qzyeh524x9w4di8u \
	I1002 22:19:00.559959 1469015 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:19:00.562829 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:19:00.563068 1469015 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:19:00.563179 1469015 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
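	
	The --discovery-token-ca-cert-hash printed in the join command above is not a hash of the ca.crt file bytes: kubeadm defines it as the SHA-256 of the CA certificate's Subject Public Key Info in DER form. A sketch that recomputes it from a CA certificate (the input path is illustrative; on the node the file lives at /var/lib/minikube/certs/ca.crt):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("ca.crt") // path illustrative
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash: sha256 over the DER-encoded SPKI
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
	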
	I1002 22:19:00.563275 1469015 cni.go:84] Creating CNI manager for ""
	I1002 22:19:00.563298 1469015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:00.566853 1469015 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:18:58.900099 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:00.902930 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:00.569902 1469015 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:19:00.588293 1469015 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:19:00.588319 1469015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:19:00.638521 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:19:01.293210 1469015 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:19:01.293291 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:01.293358 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-975002 minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=no-preload-975002 minikube.k8s.io/primary=true
	I1002 22:19:01.602415 1469015 ops.go:34] apiserver oom_adj: -16
	I1002 22:19:01.602540 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.102816 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:02.603363 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.102729 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:03.602953 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.102788 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:04.603260 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.102657 1469015 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:19:05.269135 1469015 kubeadm.go:1113] duration metric: took 3.97589838s to wait for elevateKubeSystemPrivileges
	I1002 22:19:05.269177 1469015 kubeadm.go:402] duration metric: took 22.943334557s to StartCluster
	I1002 22:19:05.269206 1469015 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.269291 1469015 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:05.271437 1469015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:05.271828 1469015 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:19:05.272190 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:19:05.272370 1469015 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:05.272387 1469015 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:19:05.272511 1469015 addons.go:69] Setting storage-provisioner=true in profile "no-preload-975002"
	I1002 22:19:05.272534 1469015 addons.go:238] Setting addon storage-provisioner=true in "no-preload-975002"
	I1002 22:19:05.272572 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.272672 1469015 addons.go:69] Setting default-storageclass=true in profile "no-preload-975002"
	I1002 22:19:05.272684 1469015 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-975002"
	I1002 22:19:05.273163 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.273818 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.275562 1469015 out.go:179] * Verifying Kubernetes components...
	I1002 22:19:05.278736 1469015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:05.328979 1469015 addons.go:238] Setting addon default-storageclass=true in "no-preload-975002"
	I1002 22:19:05.329094 1469015 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:05.329752 1469015 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:05.335568 1469015 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:19:05.338629 1469015 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.338666 1469015 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:19:05.338754 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.386885 1469015 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:05.386920 1469015 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:19:05.387019 1469015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:05.427655 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.431370 1469015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34576 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:05.903804 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:05.908412 1469015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:05.908596 1469015 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:19:05.928210 1469015 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:06.745502 1469015 node_ready.go:35] waiting up to 6m0s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:06.745830 1469015 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
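	
	The sed pipeline a few lines up performs the "host record injected" step: it rewrites the coredns ConfigMap so the Corefile gains a hosts stanza mapping host.minikube.internal to the gateway IP, inserted immediately before the forward plugin. A rough Go equivalent of that string surgery (the sample Corefile in main is a stand-in for whatever the ConfigMap holds):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// injectHostRecord inserts a hosts{} stanza before the "forward ." line,
	// mirroring the sed expression in the log.
	func injectHostRecord(corefile, hostIP string) string {
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
				b.WriteString(stanza)
			}
			b.WriteString(line)
		}
		return b.String()
	}
	
	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
	}
	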
	I1002 22:19:06.806136 1469015 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1002 22:19:03.399969 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:05.423860 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:06.809121 1469015 addons.go:514] duration metric: took 1.53671385s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:19:07.256252 1469015 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-975002" context rescaled to 1 replicas
	W1002 22:19:07.899654 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:10.398010 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:08.748541 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:10.749138 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.749360 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:12.398627 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:14.898925 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:15.248829 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.749454 1469015 node_ready.go:57] node "no-preload-975002" has "Ready":"False" status (will retry)
	W1002 22:19:17.399980 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	W1002 22:19:19.899037 1471394 pod_ready.go:104] pod "coredns-66bc5c9577-n47rb" is not "Ready", error: <nil>
	I1002 22:19:20.904548 1471394 pod_ready.go:94] pod "coredns-66bc5c9577-n47rb" is "Ready"
	I1002 22:19:20.904576 1471394 pod_ready.go:86] duration metric: took 33.011675469s for pod "coredns-66bc5c9577-n47rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.907496 1471394 pod_ready.go:83] waiting for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.911938 1471394 pod_ready.go:94] pod "etcd-embed-certs-080134" is "Ready"
	I1002 22:19:20.911972 1471394 pod_ready.go:86] duration metric: took 4.45048ms for pod "etcd-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.914304 1471394 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.918893 1471394 pod_ready.go:94] pod "kube-apiserver-embed-certs-080134" is "Ready"
	I1002 22:19:20.918920 1471394 pod_ready.go:86] duration metric: took 4.591261ms for pod "kube-apiserver-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:20.921085 1471394 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.097644 1471394 pod_ready.go:94] pod "kube-controller-manager-embed-certs-080134" is "Ready"
	I1002 22:19:21.097671 1471394 pod_ready.go:86] duration metric: took 176.560085ms for pod "kube-controller-manager-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.297761 1471394 pod_ready.go:83] waiting for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.696824 1471394 pod_ready.go:94] pod "kube-proxy-7lq28" is "Ready"
	I1002 22:19:21.696855 1471394 pod_ready.go:86] duration metric: took 399.06335ms for pod "kube-proxy-7lq28" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.897330 1471394 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296795 1471394 pod_ready.go:94] pod "kube-scheduler-embed-certs-080134" is "Ready"
	I1002 22:19:22.296828 1471394 pod_ready.go:86] duration metric: took 399.468273ms for pod "kube-scheduler-embed-certs-080134" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.296841 1471394 pod_ready.go:40] duration metric: took 34.407859012s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:22.362673 1471394 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:22.365724 1471394 out.go:179] * Done! kubectl is now configured to use "embed-certs-080134" cluster and "default" namespace by default
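	
	The pod_ready waits that conclude above treat a pod as done once its PodReady condition turns True, or once the pod no longer exists ("Ready or be gone"). A client-go sketch of one iteration of that check, in the same style as the node-readiness sketch earlier (kubeconfig path illustrative):
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReadyOrGone returns true when the pod reports Ready=True or no longer exists.
	func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "or be gone"
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path illustrative
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-n47rb"))
	}
	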
	I1002 22:19:20.248320 1469015 node_ready.go:49] node "no-preload-975002" is "Ready"
	I1002 22:19:20.248352 1469015 node_ready.go:38] duration metric: took 13.502816799s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:20.248367 1469015 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:19:20.248430 1469015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:19:20.261347 1469015 api_server.go:72] duration metric: took 14.989475418s to wait for apiserver process to appear ...
	I1002 22:19:20.261372 1469015 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:19:20.261391 1469015 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:19:20.269713 1469015 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:19:20.271031 1469015 api_server.go:141] control plane version: v1.34.1
	I1002 22:19:20.271053 1469015 api_server.go:131] duration metric: took 9.67464ms to wait for apiserver health ...
	I1002 22:19:20.271062 1469015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:19:20.274608 1469015 system_pods.go:59] 8 kube-system pods found
	I1002 22:19:20.274640 1469015 system_pods.go:61] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.274646 1469015 system_pods.go:61] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.274654 1469015 system_pods.go:61] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.274660 1469015 system_pods.go:61] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.274665 1469015 system_pods.go:61] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.274670 1469015 system_pods.go:61] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.274676 1469015 system_pods.go:61] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.274683 1469015 system_pods.go:61] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.274687 1469015 system_pods.go:74] duration metric: took 3.620185ms to wait for pod list to return data ...
	I1002 22:19:20.274695 1469015 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:19:20.280725 1469015 default_sa.go:45] found service account: "default"
	I1002 22:19:20.280751 1469015 default_sa.go:55] duration metric: took 6.050599ms for default service account to be created ...
	I1002 22:19:20.280761 1469015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:19:20.283528 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.283561 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.283568 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.283574 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.283579 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.283584 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.283588 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.283592 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.283598 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.283619 1469015 retry.go:31] will retry after 285.800121ms: missing components: kube-dns
	I1002 22:19:20.580010 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.580047 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.580054 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.580063 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.580067 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.580072 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.580077 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.580081 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.580091 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 22:19:20.580106 1469015 retry.go:31] will retry after 343.665312ms: missing components: kube-dns
	I1002 22:19:20.933271 1469015 system_pods.go:86] 8 kube-system pods found
	I1002 22:19:20.933307 1469015 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:19:20.933315 1469015 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:19:20.933321 1469015 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:19:20.933325 1469015 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running
	I1002 22:19:20.933330 1469015 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running
	I1002 22:19:20.933334 1469015 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:19:20.933340 1469015 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running
	I1002 22:19:20.933344 1469015 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Running
	I1002 22:19:20.933352 1469015 system_pods.go:126] duration metric: took 652.584288ms to wait for k8s-apps to be running ...
	I1002 22:19:20.933360 1469015 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:19:20.933419 1469015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:19:20.954574 1469015 system_svc.go:56] duration metric: took 21.205386ms WaitForService to wait for kubelet
	I1002 22:19:20.954602 1469015 kubeadm.go:586] duration metric: took 15.682736643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:19:20.954621 1469015 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:19:20.969765 1469015 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:19:20.969799 1469015 node_conditions.go:123] node cpu capacity is 2
	I1002 22:19:20.969819 1469015 node_conditions.go:105] duration metric: took 15.185958ms to run NodePressure ...
	I1002 22:19:20.969836 1469015 start.go:241] waiting for startup goroutines ...
	I1002 22:19:20.969847 1469015 start.go:246] waiting for cluster config update ...
	I1002 22:19:20.969862 1469015 start.go:255] writing updated cluster config ...
	I1002 22:19:20.970232 1469015 ssh_runner.go:195] Run: rm -f paused
	I1002 22:19:20.977402 1469015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:20.983646 1469015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.989548 1469015 pod_ready.go:94] pod "coredns-66bc5c9577-rj4bn" is "Ready"
	I1002 22:19:21.989572 1469015 pod_ready.go:86] duration metric: took 1.005900106s for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.992406 1469015 pod_ready.go:83] waiting for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.997134 1469015 pod_ready.go:94] pod "etcd-no-preload-975002" is "Ready"
	I1002 22:19:21.997213 1469015 pod_ready.go:86] duration metric: took 4.778466ms for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:21.999604 1469015 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.008465 1469015 pod_ready.go:94] pod "kube-apiserver-no-preload-975002" is "Ready"
	I1002 22:19:22.008496 1469015 pod_ready.go:86] duration metric: took 8.861043ms for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.011576 1469015 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.187884 1469015 pod_ready.go:94] pod "kube-controller-manager-no-preload-975002" is "Ready"
	I1002 22:19:22.187912 1469015 pod_ready.go:86] duration metric: took 176.308323ms for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.393864 1469015 pod_ready.go:83] waiting for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.787934 1469015 pod_ready.go:94] pod "kube-proxy-lzzt4" is "Ready"
	I1002 22:19:22.787962 1469015 pod_ready.go:86] duration metric: took 394.069322ms for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:22.988369 1469015 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387713 1469015 pod_ready.go:94] pod "kube-scheduler-no-preload-975002" is "Ready"
	I1002 22:19:23.387749 1469015 pod_ready.go:86] duration metric: took 399.351959ms for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:19:23.387762 1469015 pod_ready.go:40] duration metric: took 2.410324146s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:19:23.436907 1469015 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:19:23.440526 1469015 out.go:179] * Done! kubectl is now configured to use "no-preload-975002" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.773875109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.781620167Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.782809176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.799975422Z" level=info msg="Created container e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper" id=35f36a5a-1e61-4f17-a6f5-a6271aa4ffb5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.80236143Z" level=info msg="Starting container: e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86" id=f59b01e4-f225-4cc3-a0a4-37a02a5a3612 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:19:21 embed-certs-080134 crio[654]: time="2025-10-02T22:19:21.805298789Z" level=info msg="Started container" PID=1648 containerID=e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper id=f59b01e4-f225-4cc3-a0a4-37a02a5a3612 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00659fd51707924d8df8c83d07f15861e8e8004af7ab2b4fd4e38114edbc1397
	Oct 02 22:19:21 embed-certs-080134 conmon[1646]: conmon e2059f55d5f7c75a7bcb <ninfo>: container 1648 exited with status 1
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.246767807Z" level=info msg="Removing container: 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.257264529Z" level=info msg="Error loading conmon cgroup of container 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3: cgroup deleted" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:22 embed-certs-080134 crio[654]: time="2025-10-02T22:19:22.261829837Z" level=info msg="Removed container 32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v/dashboard-metrics-scraper" id=5549ccc3-6c59-4254-811b-307e858d0839 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.162535083Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167439077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167473858Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.167496339Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170680848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170714382Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.170736781Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.174947167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.174983696Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.175006637Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.17828172Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.178351659Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.1783768Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.181540673Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:19:26 embed-certs-080134 crio[654]: time="2025-10-02T22:19:26.181578941Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2059f55d5f7c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago       Exited              dashboard-metrics-scraper   2                   00659fd517079       dashboard-metrics-scraper-6ffb444bf9-nw57v   kubernetes-dashboard
	2e99de6785ee2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   ab6a56c3f8d0b       storage-provisioner                          kube-system
	e25231cdd4c4f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   363c7468abbed       kubernetes-dashboard-855c9754f9-9jzrx        kubernetes-dashboard
	8de2557fb7f6a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   ab6a56c3f8d0b       storage-provisioner                          kube-system
	0a48ed5b3fbf4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   6551171810c82       coredns-66bc5c9577-n47rb                     kube-system
	37a4e6d8f9f06       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   2b38b14b39440       busybox                                      default
	b37b9a0b7fd29       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   bfa345ea2462a       kindnet-mv8z6                                kube-system
	f94e9eeb4727c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   985b29e1850bd       kube-proxy-7lq28                             kube-system
	31cbedff1fe1e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   54411b658b94f       etcd-embed-certs-080134                      kube-system
	7a11f9726f5e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   8641ea91c7eda       kube-scheduler-embed-certs-080134            kube-system
	a71c5b7dea391       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   1837dd5a98103       kube-controller-manager-embed-certs-080134   kube-system
	abb55666df5bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3fa8218d8a2cb       kube-apiserver-embed-certs-080134            kube-system
	
	
	==> coredns [0a48ed5b3fbf47202add398ac63a0933859619e11b2d3a7a92ec0f84fd39b13d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53119 - 29270 "HINFO IN 7210804044794814178.1397832052451397412. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.063801773s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-080134
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-080134
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=embed-certs-080134
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_17_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:17:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-080134
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:19:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:16:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:19:15 +0000   Thu, 02 Oct 2025 22:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-080134
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 1768e21c00254ff9a86ef445008105e3
	  System UUID:                de46e61e-5f26-496d-bb31-d89253767b5d
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-n47rb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-080134                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-mv8z6                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-embed-certs-080134             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-080134    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-7lq28                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-embed-certs-080134             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nw57v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9jzrx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m42s (x8 over 2m42s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m42s (x8 over 2m42s)  kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m42s (x8 over 2m42s)  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-080134 event: Registered Node embed-certs-080134 in Controller
	  Normal   NodeReady                101s                   kubelet          Node embed-certs-080134 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node embed-certs-080134 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node embed-certs-080134 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node embed-certs-080134 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-080134 event: Registered Node embed-certs-080134 in Controller
	
	
	==> dmesg <==
	[Oct 2 21:47] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:49] overlayfs: idmapped layers are currently not supported
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [31cbedff1fe1e823d96eb4858d03eb2aa72e9a77c0ffc8651909298dfb2f2c47] <==
	{"level":"warn","ts":"2025-10-02T22:18:41.787414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.826152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.885152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.946299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:41.990692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.042982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.095747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.134121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.153515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.189035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.210473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.234260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.278938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.312501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.334256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.382173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.409463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.440282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.481653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.511287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.544783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.600648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.625672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.680952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:18:42.741586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:19:39 up  7:01,  0 user,  load average: 5.62, 3.59, 2.57
	Linux embed-certs-080134 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b37b9a0b7fd291813508411cfc8272652f0c2752c32e03b610e896ac45ffcb46] <==
	I1002 22:18:45.915080       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:18:45.915299       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:18:45.915436       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:18:45.915448       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:18:45.915458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:18:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:18:46.160292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:18:46.160311       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:18:46.160319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:18:46.160587       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:19:16.160393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1002 22:19:16.160380       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:19:16.160542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:19:16.161764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1002 22:19:17.761184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:19:17.761224       1 metrics.go:72] Registering metrics
	I1002 22:19:17.761301       1 controller.go:711] "Syncing nftables rules"
	I1002 22:19:26.162200       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:19:26.162245       1 main.go:301] handling current node
	I1002 22:19:36.163493       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1002 22:19:36.163542       1 main.go:301] handling current node
	
	
	==> kube-apiserver [abb55666df5bda820c6e6516b076744eb1e7ce0ae95bd5d8e416c3dca835aa10] <==
	I1002 22:18:44.479708       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:18:44.479727       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:18:44.485445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1002 22:18:44.487859       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:18:44.488897       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:18:44.488995       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:18:44.489249       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:18:44.490052       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:18:44.490064       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:18:44.490660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:18:44.490696       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:18:44.526979       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:18:44.528648       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1002 22:18:44.592824       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1002 22:18:44.822464       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:18:45.107334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:18:46.889856       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:18:47.127598       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:18:47.233796       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:18:47.348366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:18:47.613265       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.23.136"}
	I1002 22:18:47.642338       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.204.160"}
	I1002 22:18:49.567400       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:18:49.606894       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:18:49.904978       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a71c5b7dea391f98f7c0a495ab685f227e17c98ff9866d2a9493764e53e86da5] <==
	I1002 22:18:49.563651       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:18:49.566236       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:18:49.566464       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 22:18:49.568646       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:18:49.569001       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:18:49.570015       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:18:49.571347       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:18:49.572588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:18:49.579868       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:18:49.596646       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:18:49.596838       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:18:49.596913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:18:49.597231       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:18:49.597347       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:18:49.598560       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 22:18:49.604133       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 22:18:49.604316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 22:18:49.604431       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:18:49.604547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:18:49.609208       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:18:49.609310       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:18:49.611444       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:18:49.634391       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:18:49.634493       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:18:49.634526       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [f94e9eeb4727cf987f9a8b1a30b32e826140c814cd7eaca8aa5c10744f968eaa] <==
	I1002 22:18:47.514294       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:18:47.687145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:18:47.787247       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:18:47.807794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:18:47.807883       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:18:47.899961       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:18:47.900080       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:18:47.905691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:18:47.906158       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:18:47.906372       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:18:47.907600       1 config.go:200] "Starting service config controller"
	I1002 22:18:47.907655       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:18:47.907697       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:18:47.907739       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:18:47.907781       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:18:47.907812       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:18:47.908458       1 config.go:309] "Starting node config controller"
	I1002 22:18:47.910952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:18:47.911012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:18:48.008144       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:18:48.008244       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:18:48.008264       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a11f9726f5e6292b466176dd31607960e89c89de96d69efda2e47f6d6bf355d] <==
	I1002 22:18:41.442660       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:18:47.217765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:18:47.239456       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:18:47.277349       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:18:47.277389       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:18:47.277436       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:47.277444       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:18:47.277458       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.277464       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.280569       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:18:47.280608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:18:47.390899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:18:47.391047       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:18:47.391226       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131152     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0467aada-bdde-41bf-96cd-f172f8e2e4c3-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-nw57v\" (UID: \"0467aada-bdde-41bf-96cd-f172f8e2e4c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131209     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmg47\" (UniqueName: \"kubernetes.io/projected/0467aada-bdde-41bf-96cd-f172f8e2e4c3-kube-api-access-nmg47\") pod \"dashboard-metrics-scraper-6ffb444bf9-nw57v\" (UID: \"0467aada-bdde-41bf-96cd-f172f8e2e4c3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131231     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aef97e4d-9f32-404b-9bac-6f18e92b149a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-9jzrx\" (UID: \"aef97e4d-9f32-404b-9bac-6f18e92b149a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.131250     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sswfv\" (UniqueName: \"kubernetes.io/projected/aef97e4d-9f32-404b-9bac-6f18e92b149a-kube-api-access-sswfv\") pod \"kubernetes-dashboard-855c9754f9-9jzrx\" (UID: \"aef97e4d-9f32-404b-9bac-6f18e92b149a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx"
	Oct 02 22:18:50 embed-certs-080134 kubelet[780]: I1002 22:18:50.783908     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:18:51 embed-certs-080134 kubelet[780]: W1002 22:18:51.327562     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/d75a770c7fe51ad520c4a8157af035986e25e25f43cc6cdc2922623f52bebb2e/crio-363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944 WatchSource:0}: Error finding container 363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944: Status 404 returned error can't find the container with id 363c7468abbed6095f934e805900472871026ff946b0e37d746a4f9925f34944
	Oct 02 22:19:00 embed-certs-080134 kubelet[780]: I1002 22:19:00.254622     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9jzrx" podStartSLOduration=2.197927269 podStartE2EDuration="10.251241705s" podCreationTimestamp="2025-10-02 22:18:50 +0000 UTC" firstStartedPulling="2025-10-02 22:18:51.334272468 +0000 UTC m=+15.827193275" lastFinishedPulling="2025-10-02 22:18:59.387586905 +0000 UTC m=+23.880507711" observedRunningTime="2025-10-02 22:19:00.238726312 +0000 UTC m=+24.731647118" watchObservedRunningTime="2025-10-02 22:19:00.251241705 +0000 UTC m=+24.744162512"
	Oct 02 22:19:06 embed-certs-080134 kubelet[780]: I1002 22:19:06.198524     780 scope.go:117] "RemoveContainer" containerID="50dbec95cce22e8957360e45d8ae0da5601aac24db5524b604063a4262efa931"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: I1002 22:19:07.203260     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: E1002 22:19:07.203933     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:07 embed-certs-080134 kubelet[780]: I1002 22:19:07.204068     780 scope.go:117] "RemoveContainer" containerID="50dbec95cce22e8957360e45d8ae0da5601aac24db5524b604063a4262efa931"
	Oct 02 22:19:08 embed-certs-080134 kubelet[780]: I1002 22:19:08.209117     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:08 embed-certs-080134 kubelet[780]: E1002 22:19:08.210531     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:11 embed-certs-080134 kubelet[780]: I1002 22:19:11.284143     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:11 embed-certs-080134 kubelet[780]: E1002 22:19:11.284332     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:18 embed-certs-080134 kubelet[780]: I1002 22:19:18.231649     780 scope.go:117] "RemoveContainer" containerID="8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156"
	Oct 02 22:19:21 embed-certs-080134 kubelet[780]: I1002 22:19:21.769636     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: I1002 22:19:22.244697     780 scope.go:117] "RemoveContainer" containerID="32fe8db9fd90fe1df3700e107a3f80a3e9c596cf4da7edeb1a801f5383610fb3"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: I1002 22:19:22.245033     780 scope.go:117] "RemoveContainer" containerID="e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	Oct 02 22:19:22 embed-certs-080134 kubelet[780]: E1002 22:19:22.245222     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:31 embed-certs-080134 kubelet[780]: I1002 22:19:31.284323     780 scope.go:117] "RemoveContainer" containerID="e2059f55d5f7c75a7bcbf3d3b0c893e729d13ec8d7d790e6a89255370cba6d86"
	Oct 02 22:19:31 embed-certs-080134 kubelet[780]: E1002 22:19:31.284535     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nw57v_kubernetes-dashboard(0467aada-bdde-41bf-96cd-f172f8e2e4c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nw57v" podUID="0467aada-bdde-41bf-96cd-f172f8e2e4c3"
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:19:34 embed-certs-080134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e25231cdd4c4fd0dd69d5de90f20d75c497d5d57ba39068e59d1bd6e70ac3e8e] <==
	2025/10/02 22:18:59 Using namespace: kubernetes-dashboard
	2025/10/02 22:18:59 Using in-cluster config to connect to apiserver
	2025/10/02 22:18:59 Using secret token for csrf signing
	2025/10/02 22:18:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:18:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:18:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:18:59 Generating JWE encryption key
	2025/10/02 22:18:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:18:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:19:01 Initializing JWE encryption key from synchronized object
	2025/10/02 22:19:01 Creating in-cluster Sidecar client
	2025/10/02 22:19:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:19:01 Serving insecurely on HTTP port: 9090
	2025/10/02 22:19:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:18:59 Starting overwatch
	
	
	==> storage-provisioner [2e99de6785ee276082cf2ab23e0c125a5ecf20685dbddf201dd67fbad2b0bae0] <==
	I1002 22:19:18.289233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:19:18.302236       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:19:18.302296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:19:18.305794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:21.762228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:26.022719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:29.621343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:32.675951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.698411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.705506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:35.705772       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:19:35.706204       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5c4053a-7381-4524-a450-046d4cf76d2d", APIVersion:"v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2 became leader
	I1002 22:19:35.706475       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2!
	W1002 22:19:35.721717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:35.738394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:19:35.806959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-080134_9f226230-6dd4-4dc5-a707-fa0b516135d2!
	W1002 22:19:37.742121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:37.747949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:39.751803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:19:39.757360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8de2557fb7f6a4398634b1d07698a2116674409032a382a4eee752ca093ec156] <==
	I1002 22:18:47.298992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:19:17.394260       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-080134 -n embed-certs-080134
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-080134 -n embed-certs-080134: exit status 2 (372.436837ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-080134 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.51s)
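For context, the pod_ready.go lines at the top of this dump are minikube polling each labelled kube-system pod until it is "Ready" or gone before printing Done!. A rough client-go sketch of that style of wait follows — an illustration only, not minikube's actual code; the kubeconfig loading in main and the selector/timeout values are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitReadyOrGone polls kube-system pods matching selector until all of
	// them report Ready or the timeout expires. Because it re-lists by label
	// on every iteration, a pod that has been deleted simply drops out of the
	// result — the "or be gone" half of the check in the log above.
	func waitReadyOrGone(cs kubernetes.Interface, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		// Hypothetical setup: load whatever kubeconfig the test host uses.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitReadyOrGone(cs, "k8s-app=kube-proxy", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("all matching kube-system pods are Ready")
	}

Re-listing by selector (rather than tracking pod names) is what makes the loop tolerant of pods being replaced or garbage-collected mid-wait.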

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.683762ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
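For context on the MK_ADDON_ENABLE_PAUSED exit above: per the error chain, minikube's paused-state check shells out to `sudo runc list -f json` on the node, and here runc itself fails because /run/runc is missing. A minimal sketch of that kind of check, assuming runc on PATH and the id/status fields runc emits in its JSON listing — again an illustration, not minikube's code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields this check needs from the JSON array
	// that `runc list -f json` prints (runc also emits pid, bundle, etc.).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // "created", "running", "paused", ...
	}

	// listPaused runs `sudo runc list -f json` and returns the IDs of paused
	// containers. It returns an error when runc exits non-zero — e.g. the
	// "open /run/runc: no such file or directory" failure seen above.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err) // what the addon enable trips over
			return
		}
		fmt.Println("paused containers:", ids)
	}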
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-007061
helpers_test.go:243: (dbg) docker inspect newest-cni-007061:

-- stdout --
	[
	    {
	        "Id": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	        "Created": "2025-10-02T22:19:49.208440001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1477272,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:19:49.269423615Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hosts",
	        "LogPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01-json.log",
	        "Name": "/newest-cni-007061",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-007061:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-007061",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	                "LowerDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-007061",
	                "Source": "/var/lib/docker/volumes/newest-cni-007061/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-007061",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-007061",
	                "name.minikube.sigs.k8s.io": "newest-cni-007061",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa52161dac303540cd92a92279e961a7bf4ddd52a47079c75ec2dacd28240a64",
	            "SandboxKey": "/var/run/docker/netns/fa52161dac30",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34586"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34587"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34590"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34588"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34589"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-007061": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:01:6e:8c:29:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe85bc902fc6ddde3be87025823d1d70984e1f5f4e60ca56b5f7626fbe228993",
	                    "EndpointID": "02c1fe62521e911d00dbaefe71d4d2f692524c90f4186fc80780401402c12560",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-007061",
	                        "3375b860c995"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
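Rather than parsing this full JSON dump, the harness extracts single fields with Go templates; invocations of this kind appear verbatim later in this log, for example:

	# host port mapped to the node's SSH port (22/tcp), e.g. 34586
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-007061
	# the container's run state
	docker container inspect newest-cni-007061 --format={{.State.Status}}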
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25: (1.150742874s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-230628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-230628 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:16 UTC │ 02 Oct 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:19:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:19:49.496179 1477309 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:19:49.496389 1477309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:19:49.496415 1477309 out.go:374] Setting ErrFile to fd 2...
	I1002 22:19:49.496491 1477309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:19:49.496874 1477309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:19:49.497332 1477309 out.go:368] Setting JSON to false
	I1002 22:19:49.498329 1477309 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25315,"bootTime":1759418275,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:19:49.498436 1477309 start.go:140] virtualization:  
	I1002 22:19:49.503788 1477309 out.go:179] * [no-preload-975002] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:19:49.507004 1477309 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:19:49.507063 1477309 notify.go:220] Checking for updates...
	I1002 22:19:49.510169 1477309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:19:49.513429 1477309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:49.516260 1477309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:19:49.519905 1477309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:19:49.522835 1477309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:19:49.526302 1477309 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:49.526929 1477309 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:19:49.567448 1477309 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:19:49.567564 1477309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:19:49.675094 1477309 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-02 22:19:49.663819494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:19:49.675197 1477309 docker.go:318] overlay module found
	I1002 22:19:49.678714 1477309 out.go:179] * Using the docker driver based on existing profile
	I1002 22:19:49.682400 1477309 start.go:304] selected driver: docker
	I1002 22:19:49.682420 1477309 start.go:924] validating driver "docker" against &{Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:19:49.682512 1477309 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:19:49.683199 1477309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:19:49.821885 1477309 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-02 22:19:49.802428202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:19:49.824390 1477309 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:19:49.824438 1477309 cni.go:84] Creating CNI manager for ""
	I1002 22:19:49.825609 1477309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:49.825701 1477309 start.go:348] cluster config:
	{Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:19:49.831317 1477309 out.go:179] * Starting "no-preload-975002" primary control-plane node in "no-preload-975002" cluster
	I1002 22:19:49.834369 1477309 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:19:49.837344 1477309 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:19:49.840207 1477309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:19:49.840363 1477309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/config.json ...
	I1002 22:19:49.840676 1477309 cache.go:107] acquiring lock: {Name:mk152d517c208d1664f0679a4af4b498aa642974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.840752 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 22:19:49.840760 1477309 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.546µs
	I1002 22:19:49.840772 1477309 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 22:19:49.840783 1477309 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:19:49.840887 1477309 cache.go:107] acquiring lock: {Name:mk2e771dc7cc166c10d5027d71c2d0c2d47c7696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.840929 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1002 22:19:49.840934 1477309 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 59.125µs
	I1002 22:19:49.840941 1477309 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1002 22:19:49.840950 1477309 cache.go:107] acquiring lock: {Name:mk38333338ac0abd440be5fc80bf9b8871e73b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.840976 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1002 22:19:49.840981 1477309 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 31.868µs
	I1002 22:19:49.840986 1477309 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1002 22:19:49.840995 1477309 cache.go:107] acquiring lock: {Name:mk7bc75e6ce5346e78de53ff327072f66b3ea7e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.841022 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1002 22:19:49.841027 1477309 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.82µs
	I1002 22:19:49.841033 1477309 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1002 22:19:49.841041 1477309 cache.go:107] acquiring lock: {Name:mk7d54120f9f7c48f626ad9db8cac591d564bf34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.841066 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1002 22:19:49.841070 1477309 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.761µs
	I1002 22:19:49.841077 1477309 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1002 22:19:49.841087 1477309 cache.go:107] acquiring lock: {Name:mk522b4a5182e56669f0dc56587b197cfd9c9047 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.841116 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1002 22:19:49.841121 1477309 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 35.084µs
	I1002 22:19:49.841126 1477309 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1002 22:19:49.841135 1477309 cache.go:107] acquiring lock: {Name:mk57ef8ad91ffbf9bc4a6ed84f5d5165010d8f35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.841158 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1002 22:19:49.841163 1477309 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.898µs
	I1002 22:19:49.841168 1477309 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1002 22:19:49.841177 1477309 cache.go:107] acquiring lock: {Name:mk161e476c5626a22a5ba190c537ccb162465ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.841205 1477309 cache.go:115] /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1002 22:19:49.841210 1477309 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 34.083µs
	I1002 22:19:49.841215 1477309 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1002 22:19:49.841221 1477309 cache.go:87] Successfully saved all images to host disk.
	I1002 22:19:49.896677 1477309 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:19:49.896910 1477309 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:19:49.896933 1477309 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:19:49.896958 1477309 start.go:360] acquireMachinesLock for no-preload-975002: {Name:mk57491803cd4b07e208253ce34375590769441b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:19:49.897047 1477309 start.go:364] duration metric: took 72.876µs to acquireMachinesLock for "no-preload-975002"
	I1002 22:19:49.897102 1477309 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:19:49.897128 1477309 fix.go:54] fixHost starting: 
	I1002 22:19:49.897497 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:49.929150 1477309 fix.go:112] recreateIfNeeded on no-preload-975002: state=Stopped err=<nil>
	W1002 22:19:49.929187 1477309 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 22:19:49.107750 1476780 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-007061:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.64672435s)
	I1002 22:19:49.107784 1476780 kic.go:203] duration metric: took 4.646859658s to extract preloaded images to volume ...
	W1002 22:19:49.107939 1476780 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:19:49.108053 1476780 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:19:49.188913 1476780 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-007061 --name newest-cni-007061 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-007061 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-007061 --network newest-cni-007061 --ip 192.168.85.2 --volume newest-cni-007061:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 22:19:49.605030 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Running}}
	I1002 22:19:49.650858 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:19:49.690521 1476780 cli_runner.go:164] Run: docker exec newest-cni-007061 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:19:49.764641 1476780 oci.go:144] the created container "newest-cni-007061" has a running status.
	I1002 22:19:49.764694 1476780 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa...
	I1002 22:19:50.975386 1476780 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:19:50.994719 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:19:51.014994 1476780 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:19:51.015021 1476780 kic_runner.go:114] Args: [docker exec --privileged newest-cni-007061 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:19:51.056244 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:19:51.075999 1476780 machine.go:93] provisionDockerMachine start ...
	I1002 22:19:51.076113 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:51.094007 1476780 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:51.094379 1476780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34586 <nil> <nil>}
	I1002 22:19:51.094396 1476780 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:19:51.094994 1476780 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55916->127.0.0.1:34586: read: connection reset by peer
	I1002 22:19:49.932675 1477309 out.go:252] * Restarting existing docker container for "no-preload-975002" ...
	I1002 22:19:49.932859 1477309 cli_runner.go:164] Run: docker start no-preload-975002
	I1002 22:19:50.393046 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:50.448478 1477309 kic.go:430] container "no-preload-975002" state is running.
	I1002 22:19:50.448865 1477309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-975002
	I1002 22:19:50.480688 1477309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/config.json ...
	I1002 22:19:50.480911 1477309 machine.go:93] provisionDockerMachine start ...
	I1002 22:19:50.480969 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:50.511015 1477309 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:50.511330 1477309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1002 22:19:50.511340 1477309 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:19:50.512198 1477309 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50840->127.0.0.1:34591: read: connection reset by peer
	I1002 22:19:53.645825 1477309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-975002
	
	I1002 22:19:53.645849 1477309 ubuntu.go:182] provisioning hostname "no-preload-975002"
	I1002 22:19:53.645931 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:53.664411 1477309 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:53.664722 1477309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1002 22:19:53.664735 1477309 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-975002 && echo "no-preload-975002" | sudo tee /etc/hostname
	I1002 22:19:53.811855 1477309 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-975002
	
	I1002 22:19:53.811938 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:53.829758 1477309 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:53.830094 1477309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1002 22:19:53.830117 1477309 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-975002' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-975002/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-975002' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:19:53.966943 1477309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:19:53.967006 1477309 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:19:53.967038 1477309 ubuntu.go:190] setting up certificates
	I1002 22:19:53.967048 1477309 provision.go:84] configureAuth start
	I1002 22:19:53.967124 1477309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-975002
	I1002 22:19:53.984472 1477309 provision.go:143] copyHostCerts
	I1002 22:19:53.984560 1477309 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:19:53.984607 1477309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:19:53.984712 1477309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:19:53.984839 1477309 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:19:53.984852 1477309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:19:53.984893 1477309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:19:53.984978 1477309 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:19:53.984988 1477309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:19:53.985024 1477309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:19:53.985098 1477309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.no-preload-975002 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-975002]
	I1002 22:19:54.522676 1477309 provision.go:177] copyRemoteCerts
	I1002 22:19:54.522749 1477309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:19:54.522792 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:54.543805 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:54.644185 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:19:54.671443 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:19:54.702412 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:19:54.740593 1477309 provision.go:87] duration metric: took 773.526078ms to configureAuth
	I1002 22:19:54.740625 1477309 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:19:54.740811 1477309 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:54.740928 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:54.763687 1477309 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:54.763999 1477309 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1002 22:19:54.764014 1477309 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:19:55.216693 1477309 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:19:55.216720 1477309 machine.go:96] duration metric: took 4.735800864s to provisionDockerMachine
	I1002 22:19:55.216731 1477309 start.go:293] postStartSetup for "no-preload-975002" (driver="docker")
	I1002 22:19:55.216746 1477309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:19:55.216814 1477309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:19:55.216852 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:55.254893 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:55.378548 1477309 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:19:55.382685 1477309 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:19:55.382710 1477309 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:19:55.382721 1477309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:19:55.382776 1477309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:19:55.382875 1477309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:19:55.382983 1477309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:19:55.392180 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:19:55.421486 1477309 start.go:296] duration metric: took 204.730361ms for postStartSetup
	I1002 22:19:55.421580 1477309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:19:55.421620 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:55.439792 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:55.554887 1477309 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:19:55.559693 1477309 fix.go:56] duration metric: took 5.662574631s for fixHost
	I1002 22:19:55.559720 1477309 start.go:83] releasing machines lock for "no-preload-975002", held for 5.662630869s
	I1002 22:19:55.559827 1477309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-975002
	I1002 22:19:55.607922 1477309 ssh_runner.go:195] Run: cat /version.json
	I1002 22:19:55.607972 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:55.608263 1477309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:19:55.608319 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:55.674979 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:55.686148 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:55.895999 1477309 ssh_runner.go:195] Run: systemctl --version
	I1002 22:19:55.902592 1477309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:19:55.955435 1477309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:19:55.963721 1477309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:19:55.963851 1477309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:19:55.972394 1477309 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:19:55.972459 1477309 start.go:495] detecting cgroup driver to use...
	I1002 22:19:55.972505 1477309 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:19:55.972582 1477309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:19:55.987647 1477309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:19:56.003669 1477309 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:19:56.003786 1477309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:19:56.022968 1477309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:19:56.037306 1477309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:19:56.167066 1477309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:19:56.337130 1477309 docker.go:234] disabling docker service ...
	I1002 22:19:56.337201 1477309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:19:56.357797 1477309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:19:56.372563 1477309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:19:56.517689 1477309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:19:56.650972 1477309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:19:56.665325 1477309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:19:56.680925 1477309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:19:56.680989 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.690688 1477309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:19:56.690758 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.699901 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.708773 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.717422 1477309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:19:56.725425 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.735404 1477309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.747179 1477309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:56.760019 1477309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:19:56.768903 1477309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:19:56.778350 1477309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:56.911316 1477309 ssh_runner.go:195] Run: sudo systemctl restart crio
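	Note: taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A condensed sketch of the same sequence, assuming that drop-in path:
	
	    # pin the pause image and switch cri-o to the cgroupfs driver
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # let unprivileged pods bind low ports, and enable forwarding for kube-proxy
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	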
	I1002 22:19:57.098708 1477309 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:19:57.098780 1477309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:19:57.103044 1477309 start.go:563] Will wait 60s for crictl version
	I1002 22:19:57.103106 1477309 ssh_runner.go:195] Run: which crictl
	I1002 22:19:57.107387 1477309 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:19:57.137598 1477309 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:19:57.137731 1477309 ssh_runner.go:195] Run: crio --version
	I1002 22:19:57.179366 1477309 ssh_runner.go:195] Run: crio --version
	I1002 22:19:57.217248 1477309 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:19:54.254236 1476780 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:19:54.254354 1476780 ubuntu.go:182] provisioning hostname "newest-cni-007061"
	I1002 22:19:54.254434 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:54.280017 1476780 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:54.280336 1476780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34586 <nil> <nil>}
	I1002 22:19:54.280348 1476780 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007061 && echo "newest-cni-007061" | sudo tee /etc/hostname
	I1002 22:19:54.436645 1476780 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:19:54.436797 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:54.467341 1476780 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:54.467636 1476780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34586 <nil> <nil>}
	I1002 22:19:54.467654 1476780 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007061/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:19:54.610173 1476780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:19:54.610198 1476780 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:19:54.610224 1476780 ubuntu.go:190] setting up certificates
	I1002 22:19:54.610234 1476780 provision.go:84] configureAuth start
	I1002 22:19:54.610299 1476780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:19:54.626567 1476780 provision.go:143] copyHostCerts
	I1002 22:19:54.626637 1476780 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:19:54.626649 1476780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:19:54.626707 1476780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:19:54.626814 1476780 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:19:54.626823 1476780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:19:54.626844 1476780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:19:54.626910 1476780 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:19:54.626920 1476780 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:19:54.626941 1476780 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:19:54.627002 1476780 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007061 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-007061]
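	Note: minikube generates this server certificate in its own Go code; a rough openssl equivalent (hypothetical, for illustration only) that signs a cert with the same org and SAN list against the minikube CA would look like:
	
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.newest-cni-007061" -out server.csr
	    openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	      -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-007061")
	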
	I1002 22:19:55.989832 1476780 provision.go:177] copyRemoteCerts
	I1002 22:19:55.989914 1476780 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:19:55.989961 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:56.014929 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:19:56.122751 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:19:56.154638 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:19:56.181785 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:19:56.199423 1476780 provision.go:87] duration metric: took 1.589174964s to configureAuth
	I1002 22:19:56.199451 1476780 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:19:56.199638 1476780 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:56.199744 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:56.221870 1476780 main.go:141] libmachine: Using SSH client type: native
	I1002 22:19:56.222188 1476780 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34586 <nil> <nil>}
	I1002 22:19:56.222205 1476780 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:19:56.608984 1476780 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:19:56.609003 1476780 machine.go:96] duration metric: took 5.532980452s to provisionDockerMachine
	I1002 22:19:56.609012 1476780 client.go:171] duration metric: took 12.799562096s to LocalClient.Create
	I1002 22:19:56.609024 1476780 start.go:167] duration metric: took 12.799636211s to libmachine.API.Create "newest-cni-007061"
	I1002 22:19:56.609034 1476780 start.go:293] postStartSetup for "newest-cni-007061" (driver="docker")
	I1002 22:19:56.609043 1476780 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:19:56.609103 1476780 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:19:56.609145 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:56.634647 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:19:56.739354 1476780 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:19:56.743024 1476780 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:19:56.743050 1476780 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:19:56.743064 1476780 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:19:56.743116 1476780 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:19:56.743192 1476780 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:19:56.743293 1476780 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:19:56.751994 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:19:56.773486 1476780 start.go:296] duration metric: took 164.438895ms for postStartSetup
	I1002 22:19:56.773845 1476780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:19:56.796716 1476780 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/config.json ...
	I1002 22:19:56.797008 1476780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:19:56.797059 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:56.824452 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:19:56.944131 1476780 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:19:56.949268 1476780 start.go:128] duration metric: took 13.143503852s to createHost
	I1002 22:19:56.949301 1476780 start.go:83] releasing machines lock for "newest-cni-007061", held for 13.143647111s
	I1002 22:19:56.949386 1476780 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:19:56.972929 1476780 ssh_runner.go:195] Run: cat /version.json
	I1002 22:19:56.972980 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:56.973201 1476780 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:19:56.973264 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:19:57.000751 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:19:57.016027 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:19:57.196402 1476780 ssh_runner.go:195] Run: systemctl --version
	I1002 22:19:57.202960 1476780 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:19:57.251463 1476780 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:19:57.257147 1476780 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:19:57.257206 1476780 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:19:57.287887 1476780 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
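	Note: unlike the no-preload node (where nothing matched), two bridge CNI configs were parked here by appending a .mk_disabled suffix rather than being deleted, so the step is reversible. A sketch of the pattern, using one of the paths from the log:
	
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist{,.mk_disabled}      # disable
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	            /etc/cni/net.d/87-podman-bridge.conflist                     # restore later if needed
	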
	I1002 22:19:57.287913 1476780 start.go:495] detecting cgroup driver to use...
	I1002 22:19:57.287945 1476780 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:19:57.287995 1476780 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:19:57.308643 1476780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:19:57.324059 1476780 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:19:57.324127 1476780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:19:57.341942 1476780 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:19:57.361118 1476780 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:19:57.517310 1476780 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:19:57.695807 1476780 docker.go:234] disabling docker service ...
	I1002 22:19:57.695877 1476780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:19:57.736150 1476780 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:19:57.757196 1476780 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:19:57.945697 1476780 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:19:58.124229 1476780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:19:58.142178 1476780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:19:58.158482 1476780 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:19:58.158547 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.167264 1476780 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:19:58.167334 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.175510 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.183531 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.192185 1476780 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:19:58.199975 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.209679 1476780 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.225078 1476780 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:19:58.236223 1476780 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:19:58.244010 1476780 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:19:58.251292 1476780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:58.401245 1476780 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:19:58.610963 1476780 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:19:58.611033 1476780 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:19:58.617142 1476780 start.go:563] Will wait 60s for crictl version
	I1002 22:19:58.617205 1476780 ssh_runner.go:195] Run: which crictl
	I1002 22:19:58.621704 1476780 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:19:58.664701 1476780 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:19:58.664781 1476780 ssh_runner.go:195] Run: crio --version
	I1002 22:19:58.729589 1476780 ssh_runner.go:195] Run: crio --version
	I1002 22:19:58.790101 1476780 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:19:58.793220 1476780 cli_runner.go:164] Run: docker network inspect newest-cni-007061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:19:58.814208 1476780 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:19:58.818447 1476780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:19:58.839888 1476780 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 22:19:57.220262 1477309 cli_runner.go:164] Run: docker network inspect no-preload-975002 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:19:57.240823 1477309 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:19:57.246076 1477309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:19:57.256565 1477309 kubeadm.go:883] updating cluster {Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:19:57.256679 1477309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:19:57.256721 1477309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:19:57.315132 1477309 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:19:57.315153 1477309 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:19:57.315165 1477309 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1002 22:19:57.315256 1477309 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-975002 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
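	Note: the unit fragment above is later written as a systemd drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp at 22:19:57.427653 below). On the node it can be inspected with standard systemd tooling:
	
	    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in with the ExecStart override
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet
	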
	I1002 22:19:57.315339 1477309 ssh_runner.go:195] Run: crio config
	I1002 22:19:57.402945 1477309 cni.go:84] Creating CNI manager for ""
	I1002 22:19:57.403015 1477309 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:57.403049 1477309 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 22:19:57.403117 1477309 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-975002 NodeName:no-preload-975002 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:19:57.403302 1477309 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-975002"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
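	Note: the rendered config above is shipped to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, per the scp below). As a sanity check, such a file can be validated offline with kubeadm itself (the validate subcommand exists in recent kubeadm releases, including v1.34), e.g.:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	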
	I1002 22:19:57.403420 1477309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:19:57.416298 1477309 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:19:57.416436 1477309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:19:57.427653 1477309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:19:57.451259 1477309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:19:57.465436 1477309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1002 22:19:57.482003 1477309 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:19:57.486654 1477309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:19:57.500991 1477309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:57.663617 1477309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:57.683754 1477309 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002 for IP: 192.168.76.2
	I1002 22:19:57.683785 1477309 certs.go:195] generating shared ca certs ...
	I1002 22:19:57.683804 1477309 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:57.684016 1477309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:19:57.684096 1477309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:19:57.684110 1477309 certs.go:257] generating profile certs ...
	I1002 22:19:57.684227 1477309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.key
	I1002 22:19:57.684338 1477309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key.ca172d57
	I1002 22:19:57.684489 1477309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key
	I1002 22:19:57.684641 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:19:57.684699 1477309 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:19:57.684713 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:19:57.684742 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:19:57.684796 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:19:57.684826 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:19:57.684902 1477309 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:19:57.685697 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:19:57.713823 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:19:57.747620 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:19:57.783650 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:19:57.836275 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:19:57.911473 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 22:19:57.971604 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:19:57.991519 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:19:58.023502 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:19:58.057629 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:19:58.083081 1477309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:19:58.105486 1477309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:19:58.121806 1477309 ssh_runner.go:195] Run: openssl version
	I1002 22:19:58.132966 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:19:58.146643 1477309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:19:58.151291 1477309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:19:58.151405 1477309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:19:58.201454 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:19:58.210824 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:19:58.220898 1477309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:19:58.225868 1477309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:19:58.225960 1477309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:19:58.273020 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:19:58.281054 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:19:58.293076 1477309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:19:58.302767 1477309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:19:58.302836 1477309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:19:58.353796 1477309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
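	Note: the `openssl x509 -hash -noout` calls above compute the subject-name hash used by OpenSSL's c_rehash directory layout; the ln -fs targets (b5213941.0, 51391683.0, 3ec20f2e.0) are those hashes plus a .0 collision counter. A sketch of the convention:
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	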
	I1002 22:19:58.362947 1477309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:19:58.367496 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:19:58.458078 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:19:58.519611 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:19:58.608442 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:19:58.680549 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:19:58.841773 1477309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
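	Note: each `-checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 h); this is how the restart path decides whether control-plane certs need regenerating. For example:
	
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expiring soon"
	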
	I1002 22:19:58.929391 1477309 kubeadm.go:400] StartCluster: {Name:no-preload-975002 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-975002 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:19:58.929483 1477309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:19:58.929551 1477309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:19:59.059558 1477309 cri.go:89] found id: "1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1"
	I1002 22:19:59.059582 1477309 cri.go:89] found id: "43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915"
	I1002 22:19:59.059587 1477309 cri.go:89] found id: "6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56"
	I1002 22:19:59.059591 1477309 cri.go:89] found id: "61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289"
	I1002 22:19:59.059594 1477309 cri.go:89] found id: ""
	I1002 22:19:59.059646 1477309 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:19:59.089550 1477309 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:19:59Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:19:59.089632 1477309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:19:59.115187 1477309 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:19:59.115219 1477309 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:19:59.115271 1477309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:19:59.132564 1477309 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:19:59.132970 1477309 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-975002" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:59.133075 1477309 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-975002" cluster setting kubeconfig missing "no-preload-975002" context setting]
	I1002 22:19:59.133356 1477309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.135045 1477309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:19:59.151412 1477309 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1002 22:19:59.151455 1477309 kubeadm.go:601] duration metric: took 36.228975ms to restartPrimaryControlPlane
	I1002 22:19:59.151464 1477309 kubeadm.go:402] duration metric: took 222.084976ms to StartCluster
	I1002 22:19:59.151479 1477309 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.151544 1477309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:19:59.152224 1477309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.152462 1477309 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:19:59.152885 1477309 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:19:59.152947 1477309 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:19:59.153021 1477309 addons.go:69] Setting storage-provisioner=true in profile "no-preload-975002"
	I1002 22:19:59.153034 1477309 addons.go:238] Setting addon storage-provisioner=true in "no-preload-975002"
	W1002 22:19:59.153045 1477309 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:19:59.153072 1477309 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:59.153951 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:59.154297 1477309 addons.go:69] Setting dashboard=true in profile "no-preload-975002"
	I1002 22:19:59.154318 1477309 addons.go:238] Setting addon dashboard=true in "no-preload-975002"
	W1002 22:19:59.154325 1477309 addons.go:247] addon dashboard should already be in state true
	I1002 22:19:59.154365 1477309 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:59.154670 1477309 addons.go:69] Setting default-storageclass=true in profile "no-preload-975002"
	I1002 22:19:59.154688 1477309 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-975002"
	I1002 22:19:59.154927 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:59.155853 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:59.158111 1477309 out.go:179] * Verifying Kubernetes components...
	I1002 22:19:59.161128 1477309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:59.202152 1477309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:19:59.205060 1477309 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:59.205078 1477309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:19:59.205144 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:59.210887 1477309 addons.go:238] Setting addon default-storageclass=true in "no-preload-975002"
	W1002 22:19:59.210908 1477309 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:19:59.210932 1477309 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:19:59.211339 1477309 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:19:59.238277 1477309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:19:59.238384 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:59.244060 1477309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1002 22:19:59.247837 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:19:59.247864 1477309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:19:59.247934 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:59.256687 1477309 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:59.256713 1477309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:19:59.256781 1477309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:19:59.302787 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:19:59.302890 1477309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
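	Note: with the manifests staged under /etc/kubernetes/addons, minikube applies them through the apiserver. A rough hand-run equivalent (an assumption — the apply step itself is not shown in this excerpt) would be:
	
	    kubectl --context no-preload-975002 apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	    kubectl --context no-preload-975002 apply -f /etc/kubernetes/addons/dashboard-ns.yaml
	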
	I1002 22:19:58.842968 1476780 kubeadm.go:883] updating cluster {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:19:58.843108 1476780 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:19:58.843188 1476780 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:19:58.886907 1476780 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:19:58.886927 1476780 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:19:58.886984 1476780 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:19:58.918262 1476780 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:19:58.918282 1476780 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:19:58.918289 1476780 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:19:58.918388 1476780 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-007061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:19:58.918467 1476780 ssh_runner.go:195] Run: crio config
	I1002 22:19:59.025282 1476780 cni.go:84] Creating CNI manager for ""
	I1002 22:19:59.025378 1476780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:19:59.025414 1476780 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 22:19:59.025529 1476780 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-007061 NodeName:newest-cni-007061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:19:59.025717 1476780 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-007061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
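
The generated file above bundles four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one multi-document YAML stream separated by ---. A minimal sketch of iterating such a stream with gopkg.in/yaml.v3 (an assumed third-party dependency; minikube itself assembles the file from templates rather than parsing it this way):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries its own apiVersion/kind pair.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }
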
	
	I1002 22:19:59.025845 1476780 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:19:59.036929 1476780 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:19:59.037076 1476780 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:19:59.049305 1476780 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:19:59.075693 1476780 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:19:59.091427 1476780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 22:19:59.107366 1476780 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:19:59.112800 1476780 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
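
The bash one-liner above is an idempotent edit: it drops any existing control-plane.minikube.internal entry, appends a fresh one, and copies the result back through a temp file so a partial write never corrupts /etc/hosts. A rough Go equivalent of the same filter-and-append logic (the helper name and error handling are this sketch's own):

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath so that exactly one line maps name to ip.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirrors `grep -v $'\tcontrol-plane.minikube.internal$'`.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        // Temp file first, then swap into place, like the `> /tmp/h.$$; sudo cp` step.
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
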
	I1002 22:19:59.122999 1476780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:19:59.395347 1476780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:59.433350 1476780 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061 for IP: 192.168.85.2
	I1002 22:19:59.433373 1476780 certs.go:195] generating shared ca certs ...
	I1002 22:19:59.433390 1476780 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.433562 1476780 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:19:59.433613 1476780 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:19:59.433624 1476780 certs.go:257] generating profile certs ...
	I1002 22:19:59.433682 1476780 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.key
	I1002 22:19:59.433700 1476780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.crt with IP's: []
	I1002 22:19:59.517066 1476780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.crt ...
	I1002 22:19:59.517099 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.crt: {Name:mk51d8c350fe2b6691029938da16220b2c871fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.517279 1476780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.key ...
	I1002 22:19:59.517294 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.key: {Name:mkf847b511d203cd7c72c2fb0a304a04c72d3776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.517377 1476780 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433
	I1002 22:19:59.517396 1476780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt.e4a84433 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1002 22:19:59.889351 1476780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt.e4a84433 ...
	I1002 22:19:59.889384 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt.e4a84433: {Name:mk499bc2c50e56c619b84acee398adea8fe2f9a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.889600 1476780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433 ...
	I1002 22:19:59.889616 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433: {Name:mke089ed2b1bd64ff710ab94865bdf32ca7892d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:19:59.889721 1476780 certs.go:382] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt.e4a84433 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt
	I1002 22:19:59.889813 1476780 certs.go:386] copying /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433 -> /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key
	I1002 22:19:59.889875 1476780 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key
	I1002 22:19:59.889895 1476780 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt with IP's: []
	I1002 22:20:00.092067 1476780 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt ...
	I1002 22:20:00.092109 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt: {Name:mkb0ec335f746548f30429276f4f4989791b45d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:00.092362 1476780 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key ...
	I1002 22:20:00.092383 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key: {Name:mkc049dab241f8f3427071d5bb117757fb5b86a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
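
The crypto.go lines above boil down to Go's crypto/x509: a fresh leaf keypair is generated and signed by the shared minikubeCA, with the IP SANs from the log baked in. A compressed sketch of that flow under those assumptions (error handling elided; the real code reuses the existing CA on disk instead of generating one):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA keypair (minikube loads minikubeCA from disk instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert carrying the IP SANs seen in the log above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
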
	I1002 22:20:00.092624 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:20:00.092678 1476780 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:20:00.092695 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:20:00.092724 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:20:00.092753 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:20:00.092785 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:20:00.092839 1476780 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:20:00.093662 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:20:00.189705 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:20:00.438778 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:20:00.629179 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:20:00.712019 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:20:00.740707 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:20:00.784146 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:20:00.815268 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:20:00.854403 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:20:00.890123 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:20:00.917271 1476780 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:20:00.947597 1476780 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:20:00.975643 1476780 ssh_runner.go:195] Run: openssl version
	I1002 22:20:00.982940 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:20:00.995266 1476780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:01.002670 1476780 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:01.002750 1476780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:01.054409 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:20:01.064838 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:20:01.075028 1476780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:20:01.079553 1476780 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:20:01.079671 1476780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:20:01.122472 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:20:01.132309 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:20:01.142383 1476780 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:20:01.147477 1476780 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:20:01.147602 1476780 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:20:01.236338 1476780 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
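
The openssl x509 -hash calls above explain the odd-looking b5213941.0, 51391683.0, and 3ec20f2e.0 symlinks: OpenSSL locates trusted CAs in /etc/ssl/certs by a hash of the certificate's subject name, so each PEM gets a <subject-hash>.0 alias. The same two steps, sketched from Go by shelling out to the real openssl binary (the fixed .0 suffix assumes no hash collision):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }
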
	I1002 22:20:01.249621 1476780 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:20:01.261107 1476780 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 22:20:01.261227 1476780 kubeadm.go:400] StartCluster: {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:01.261437 1476780 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:20:01.261541 1476780 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:20:01.318184 1476780 cri.go:89] found id: ""
	I1002 22:20:01.318341 1476780 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:20:01.331061 1476780 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:20:01.340849 1476780 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 22:20:01.340987 1476780 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:20:01.353157 1476780 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 22:20:01.353224 1476780 kubeadm.go:157] found existing configuration files:
	
	I1002 22:20:01.353323 1476780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:20:01.368576 1476780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 22:20:01.368722 1476780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 22:20:01.383618 1476780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:20:01.397349 1476780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 22:20:01.397474 1476780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 22:20:01.406138 1476780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:20:01.415927 1476780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 22:20:01.416064 1476780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 22:20:01.424620 1476780 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:20:01.435426 1476780 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 22:20:01.435540 1476780 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
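
The four grep/rm pairs above apply one rule per kubeconfig: if the file does not mention https://control-plane.minikube.internal:8443, it is stale (or simply absent) and is removed before kubeadm init runs. The same loop, sketched in Go against the local filesystem (minikube runs each step through ssh_runner instead):

    package main

    import (
        "bytes"
        "os"
        "path/filepath"
    )

    func main() {
        marker := []byte("https://control-plane.minikube.internal:8443")
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, name := range files {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err == nil && bytes.Contains(data, marker) {
                continue // already points at this control plane; keep it
            }
            // Missing or pointing elsewhere: remove, like `sudo rm -f`.
            os.Remove(path)
        }
    }
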
	I1002 22:20:01.445065 1476780 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 22:20:01.514303 1476780 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 22:20:01.514750 1476780 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 22:20:01.546719 1476780 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 22:20:01.546844 1476780 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 22:20:01.546914 1476780 kubeadm.go:318] OS: Linux
	I1002 22:20:01.546987 1476780 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 22:20:01.547067 1476780 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 22:20:01.547145 1476780 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 22:20:01.547230 1476780 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 22:20:01.547308 1476780 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 22:20:01.547388 1476780 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 22:20:01.547463 1476780 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 22:20:01.547543 1476780 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 22:20:01.547651 1476780 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 22:20:01.666185 1476780 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 22:20:01.666398 1476780 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 22:20:01.666551 1476780 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 22:20:01.686452 1476780 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 22:20:01.693115 1476780 out.go:252]   - Generating certificates and keys ...
	I1002 22:20:01.693289 1476780 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 22:20:01.693410 1476780 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 22:20:02.658419 1476780 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 22:20:03.382376 1476780 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 22:19:59.639800 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:19:59.639827 1477309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:19:59.668932 1477309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:19:59.713309 1477309 node_ready.go:35] waiting up to 6m0s for node "no-preload-975002" to be "Ready" ...
	I1002 22:19:59.721657 1477309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:19:59.731317 1477309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:19:59.811103 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:19:59.811125 1477309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:19:59.969136 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:19:59.969156 1477309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:20:00.306405 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:20:00.306430 1477309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:20:00.548005 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:20:00.548041 1477309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:20:00.666309 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:20:00.666335 1477309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:20:00.732801 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:20:00.732893 1477309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:20:00.771941 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:20:00.771964 1477309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:20:00.804912 1477309 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:20:00.805001 1477309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:20:00.832266 1477309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:20:03.752970 1476780 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 22:20:04.529756 1476780 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 22:20:05.206166 1476780 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 22:20:05.210424 1476780 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-007061] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:20:05.673074 1476780 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 22:20:05.673914 1476780 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-007061] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1002 22:20:05.844822 1476780 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 22:20:07.414018 1476780 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 22:20:08.461739 1476780 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 22:20:08.463474 1476780 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 22:20:06.610279 1477309 node_ready.go:49] node "no-preload-975002" is "Ready"
	I1002 22:20:06.610307 1477309 node_ready.go:38] duration metric: took 6.896972727s for node "no-preload-975002" to be "Ready" ...
	I1002 22:20:06.610320 1477309 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:20:06.610389 1477309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:20:06.929723 1477309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.20803349s)
	I1002 22:20:09.621775 1477309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.89042886s)
	I1002 22:20:09.786431 1477309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.9540651s)
	I1002 22:20:09.786712 1477309 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.176309471s)
	I1002 22:20:09.786776 1477309 api_server.go:72] duration metric: took 10.634284232s to wait for apiserver process to appear ...
	I1002 22:20:09.786797 1477309 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:20:09.786841 1477309 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:20:09.789864 1477309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-975002 addons enable metrics-server
	
	I1002 22:20:09.792933 1477309 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 22:20:09.025091 1476780 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 22:20:09.229226 1476780 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 22:20:09.770405 1476780 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 22:20:09.891667 1476780 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 22:20:10.806918 1476780 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 22:20:10.807230 1476780 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 22:20:10.810526 1476780 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 22:20:10.814363 1476780 out.go:252]   - Booting up control plane ...
	I1002 22:20:10.814476 1476780 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 22:20:10.814564 1476780 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 22:20:10.816122 1476780 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 22:20:10.844266 1476780 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 22:20:10.844386 1476780 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 22:20:10.853931 1476780 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 22:20:10.854100 1476780 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 22:20:10.854146 1476780 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 22:20:11.006500 1476780 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 22:20:11.006633 1476780 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 22:20:13.012403 1476780 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00110388s
	I1002 22:20:13.012528 1476780 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 22:20:13.012617 1476780 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1002 22:20:13.012743 1476780 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 22:20:13.012847 1476780 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 22:20:09.795874 1477309 addons.go:514] duration metric: took 10.642881705s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 22:20:09.813029 1477309 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1002 22:20:09.814380 1477309 api_server.go:141] control plane version: v1.34.1
	I1002 22:20:09.814404 1477309 api_server.go:131] duration metric: took 27.589377ms to wait for apiserver health ...
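
The healthz wait above boils down to GETting https://<apiserver>:8443/healthz until it returns 200 with body "ok". A minimal polling sketch; InsecureSkipVerify stands in for loading minikube's CA bundle, which the real check uses:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch only: the real check trusts minikubeCA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.76.2:8443/healthz" // endpoint from the log above
        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for healthz")
    }
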
	I1002 22:20:09.814414 1477309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:20:09.827007 1477309 system_pods.go:59] 8 kube-system pods found
	I1002 22:20:09.827098 1477309 system_pods.go:61] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:20:09.827120 1477309 system_pods.go:61] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:20:09.827158 1477309 system_pods.go:61] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:20:09.827182 1477309 system_pods.go:61] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:20:09.827202 1477309 system_pods.go:61] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:20:09.827222 1477309 system_pods.go:61] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:20:09.827255 1477309 system_pods.go:61] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:20:09.827281 1477309 system_pods.go:61] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Running
	I1002 22:20:09.827303 1477309 system_pods.go:74] duration metric: took 12.883304ms to wait for pod list to return data ...
	I1002 22:20:09.827337 1477309 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:20:09.835218 1477309 default_sa.go:45] found service account: "default"
	I1002 22:20:09.835294 1477309 default_sa.go:55] duration metric: took 7.931877ms for default service account to be created ...
	I1002 22:20:09.835318 1477309 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:20:09.839163 1477309 system_pods.go:86] 8 kube-system pods found
	I1002 22:20:09.839248 1477309 system_pods.go:89] "coredns-66bc5c9577-rj4bn" [1a57d12e-2a90-4806-b64a-433cef84fcb9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 22:20:09.839270 1477309 system_pods.go:89] "etcd-no-preload-975002" [9821ce05-5f47-4787-a06a-19a9f56a463f] Running
	I1002 22:20:09.839305 1477309 system_pods.go:89] "kindnet-hpq6g" [a626a55d-103d-42f8-8f72-d72089831cc7] Running
	I1002 22:20:09.839332 1477309 system_pods.go:89] "kube-apiserver-no-preload-975002" [eb0d555b-156a-4a40-9e69-be6cdac03553] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:20:09.839353 1477309 system_pods.go:89] "kube-controller-manager-no-preload-975002" [1dd307a1-60be-466d-ba18-e161f43a2d07] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:20:09.839395 1477309 system_pods.go:89] "kube-proxy-lzzt4" [2990596b-be54-41a2-a537-a97040189e3f] Running
	I1002 22:20:09.839420 1477309 system_pods.go:89] "kube-scheduler-no-preload-975002" [1a4d710e-b414-4278-b96e-02ecf85d6196] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:20:09.839439 1477309 system_pods.go:89] "storage-provisioner" [d5c6e3b7-bdd2-4497-aa9c-da91bed71489] Running
	I1002 22:20:09.839474 1477309 system_pods.go:126] duration metric: took 4.135351ms to wait for k8s-apps to be running ...
	I1002 22:20:09.839499 1477309 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:20:09.839585 1477309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:20:09.893962 1477309 system_svc.go:56] duration metric: took 54.454287ms WaitForService to wait for kubelet
	I1002 22:20:09.894053 1477309 kubeadm.go:586] duration metric: took 10.741557593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:20:09.894089 1477309 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:20:09.901510 1477309 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:20:09.901624 1477309 node_conditions.go:123] node cpu capacity is 2
	I1002 22:20:09.901653 1477309 node_conditions.go:105] duration metric: took 7.532787ms to run NodePressure ...
	I1002 22:20:09.901696 1477309 start.go:241] waiting for startup goroutines ...
	I1002 22:20:09.901716 1477309 start.go:246] waiting for cluster config update ...
	I1002 22:20:09.901752 1477309 start.go:255] writing updated cluster config ...
	I1002 22:20:09.902106 1477309 ssh_runner.go:195] Run: rm -f paused
	I1002 22:20:09.907476 1477309 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:20:09.911839 1477309 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 22:20:11.917230 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:13.920431 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
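
The repeated pod_ready.go:104 warnings come from a poll that counts a pod as done only once its PodReady condition turns True (or the pod disappears). Roughly, with client-go (import paths assumed; the test helper's real logic also accepts pod deletion, which is omitted here):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            if ctx.Err() != nil {
                panic("timed out waiting for pod Ready")
            }
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-rj4bn", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
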
	I1002 22:20:17.624504 1476780 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.612911235s
	W1002 22:20:15.923416 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:18.418498 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:21.483646 1476780 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.472198459s
	I1002 22:20:22.512539 1476780 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.501308274s
	I1002 22:20:22.536320 1476780 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 22:20:22.561062 1476780 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 22:20:22.579573 1476780 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 22:20:22.580080 1476780 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-007061 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 22:20:22.596634 1476780 kubeadm.go:318] [bootstrap-token] Using token: xi8n1f.dx400ui22rqyh2tg
	I1002 22:20:22.599643 1476780 out.go:252]   - Configuring RBAC rules ...
	I1002 22:20:22.599774 1476780 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 22:20:22.605609 1476780 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 22:20:22.624141 1476780 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 22:20:22.636315 1476780 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 22:20:22.649499 1476780 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 22:20:22.661080 1476780 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 22:20:22.920773 1476780 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 22:20:23.405622 1476780 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 22:20:23.937030 1476780 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 22:20:23.938452 1476780 kubeadm.go:318] 
	I1002 22:20:23.938538 1476780 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 22:20:23.938549 1476780 kubeadm.go:318] 
	I1002 22:20:23.938631 1476780 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 22:20:23.938645 1476780 kubeadm.go:318] 
	I1002 22:20:23.938673 1476780 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 22:20:23.938746 1476780 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 22:20:23.938804 1476780 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 22:20:23.938812 1476780 kubeadm.go:318] 
	I1002 22:20:23.938875 1476780 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 22:20:23.938887 1476780 kubeadm.go:318] 
	I1002 22:20:23.938938 1476780 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 22:20:23.938947 1476780 kubeadm.go:318] 
	I1002 22:20:23.939001 1476780 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 22:20:23.939084 1476780 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 22:20:23.939160 1476780 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 22:20:23.939169 1476780 kubeadm.go:318] 
	I1002 22:20:23.939260 1476780 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 22:20:23.939344 1476780 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 22:20:23.939352 1476780 kubeadm.go:318] 
	I1002 22:20:23.939441 1476780 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xi8n1f.dx400ui22rqyh2tg \
	I1002 22:20:23.939554 1476780 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 \
	I1002 22:20:23.939579 1476780 kubeadm.go:318] 	--control-plane 
	I1002 22:20:23.939590 1476780 kubeadm.go:318] 
	I1002 22:20:23.939679 1476780 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 22:20:23.939687 1476780 kubeadm.go:318] 
	I1002 22:20:23.939774 1476780 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xi8n1f.dx400ui22rqyh2tg \
	I1002 22:20:23.939893 1476780 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c5cbf754d8a1ac29c51617345bb5e81bd1b57fcdb59425c15c0fb46ce483c996 
	I1002 22:20:23.945096 1476780 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 22:20:23.945362 1476780 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 22:20:23.945484 1476780 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
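
The --discovery-token-ca-cert-hash in the join command above is not a hash of the whole certificate: kubeadm hashes the DER-encoded Subject Public Key Info of the cluster CA (RFC 7469 style public key pinning) and prefixes it with sha256:. Recomputing it from ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // should match the kubeadm join flag above
    }
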
	I1002 22:20:23.945543 1476780 cni.go:84] Creating CNI manager for ""
	I1002 22:20:23.945562 1476780 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:23.951504 1476780 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1002 22:20:20.424579 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:22.918727 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:23.955949 1476780 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:20:23.964949 1476780 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 22:20:23.964981 1476780 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 22:20:24.016558 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:20:24.850178 1476780 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:20:24.850309 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:24.850386 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-007061 minikube.k8s.io/updated_at=2025_10_02T22_20_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4 minikube.k8s.io/name=newest-cni-007061 minikube.k8s.io/primary=true
	I1002 22:20:25.360838 1476780 ops.go:34] apiserver oom_adj: -16
	I1002 22:20:25.360960 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:25.861346 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:26.361762 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:26.861546 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:27.361067 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:27.861310 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:28.361753 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:28.861407 1476780 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 22:20:29.091640 1476780 kubeadm.go:1113] duration metric: took 4.241372381s to wait for elevateKubeSystemPrivileges
	I1002 22:20:29.091666 1476780 kubeadm.go:402] duration metric: took 27.830444693s to StartCluster
	I1002 22:20:29.091682 1476780 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:29.091741 1476780 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:29.092711 1476780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:29.092925 1476780 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:20:29.093058 1476780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:20:29.093331 1476780 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:29.093429 1476780 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:20:29.093488 1476780 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-007061"
	I1002 22:20:29.093504 1476780 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-007061"
	I1002 22:20:29.093526 1476780 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:29.094133 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:29.094463 1476780 addons.go:69] Setting default-storageclass=true in profile "newest-cni-007061"
	I1002 22:20:29.094482 1476780 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-007061"
	I1002 22:20:29.094740 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:29.097687 1476780 out.go:179] * Verifying Kubernetes components...
	I1002 22:20:29.104962 1476780 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:29.128144 1476780 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1002 22:20:24.925124 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:26.939037 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:29.418164 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:29.131662 1476780 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:29.131684 1476780 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:20:29.131752 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:29.138947 1476780 addons.go:238] Setting addon default-storageclass=true in "newest-cni-007061"
	I1002 22:20:29.138992 1476780 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:29.139413 1476780 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:29.177046 1476780 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:29.177067 1476780 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:20:29.177131 1476780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:29.180438 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:29.210076 1476780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34586 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:29.435693 1476780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:29.445756 1476780 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:29.451254 1476780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 22:20:29.465441 1476780 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:20:30.210853 1476780 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
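
The sed pipeline a few lines above edits the coredns ConfigMap in flight: it inserts a hosts plugin block ahead of the "forward . /etc/resolv.conf" directive and a log directive after errors, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway (192.168.85.1 here). After the replace, the relevant Corefile portion should look roughly like this (reconstructed from the sed expressions, not captured output):

    errors
    log
    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
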
	I1002 22:20:30.212800 1476780 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:20:30.212866 1476780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:20:30.239844 1476780 api_server.go:72] duration metric: took 1.146891147s to wait for apiserver process to appear ...
	I1002 22:20:30.239952 1476780 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:20:30.239973 1476780 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:30.259235 1476780 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:20:30.261114 1476780 api_server.go:141] control plane version: v1.34.1
	I1002 22:20:30.261150 1476780 api_server.go:131] duration metric: took 21.190648ms to wait for apiserver health ...
	I1002 22:20:30.261160 1476780 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:20:30.266017 1476780 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 22:20:30.268981 1476780 addons.go:514] duration metric: took 1.175512381s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 22:20:30.277458 1476780 system_pods.go:59] 9 kube-system pods found
	I1002 22:20:30.277497 1476780 system_pods.go:61] "coredns-66bc5c9577-2dqxn" [348992da-d61c-41e7-83ec-154b31fd46ff] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:30.277505 1476780 system_pods.go:61] "coredns-66bc5c9577-h7pp8" [c67b11a7-df6e-47e6-ad4f-d31506dd89b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:30.277512 1476780 system_pods.go:61] "etcd-newest-cni-007061" [d0ab93e4-ccf4-4b37-9203-848cd4c28976] Running
	I1002 22:20:30.277517 1476780 system_pods.go:61] "kindnet-h2fvd" [b797cb62-e31f-4e5b-825b-81189902db5f] Running
	I1002 22:20:30.277523 1476780 system_pods.go:61] "kube-apiserver-newest-cni-007061" [6aec1d67-1e4b-4386-b8a8-4ff00284349f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:20:30.277528 1476780 system_pods.go:61] "kube-controller-manager-newest-cni-007061" [d00bdb1f-2170-4f7e-815a-5deb837a0264] Running
	I1002 22:20:30.277532 1476780 system_pods.go:61] "kube-proxy-m892s" [1995a215-d278-4c89-b447-26a58362aab5] Running
	I1002 22:20:30.277539 1476780 system_pods.go:61] "kube-scheduler-newest-cni-007061" [c7a188d8-ffeb-4928-91de-f1bcd7a14aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:20:30.277547 1476780 system_pods.go:61] "storage-provisioner" [063bf5bf-9b28-43fe-9f9e-f76c3dc4bd44] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:30.277553 1476780 system_pods.go:74] duration metric: took 16.388389ms to wait for pod list to return data ...
	I1002 22:20:30.277566 1476780 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:20:30.287781 1476780 default_sa.go:45] found service account: "default"
	I1002 22:20:30.287820 1476780 default_sa.go:55] duration metric: took 10.247331ms for default service account to be created ...
	I1002 22:20:30.287835 1476780 kubeadm.go:586] duration metric: took 1.194887104s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 22:20:30.287852 1476780 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:20:30.298550 1476780 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:20:30.298585 1476780 node_conditions.go:123] node cpu capacity is 2
	I1002 22:20:30.298598 1476780 node_conditions.go:105] duration metric: took 10.740803ms to run NodePressure ...
	I1002 22:20:30.298611 1476780 start.go:241] waiting for startup goroutines ...
	I1002 22:20:30.714455 1476780 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-007061" context rescaled to 1 replicas
	I1002 22:20:30.714501 1476780 start.go:246] waiting for cluster config update ...
	I1002 22:20:30.714515 1476780 start.go:255] writing updated cluster config ...
	I1002 22:20:30.714830 1476780 ssh_runner.go:195] Run: rm -f paused
	I1002 22:20:30.782970 1476780 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:20:30.788269 1476780 out.go:179] * Done! kubectl is now configured to use "newest-cni-007061" cluster and "default" namespace by default
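	
	# A minimal sketch, assuming the cluster from this run is still reachable:
	# the "waiting for apiserver healthz" step above (api_server.go:253) polls
	# the endpoint below until it returns 200; -k skips TLS verification:
	curl -k https://192.168.85.2:8443/healthz
	# expected body on success, matching the log above: ok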
	
	
	==> CRI-O <==
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.861815225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.869825074Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=27482f18-4045-4832-acdd-8778c22c2c72 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.880721567Z" level=info msg="Ran pod sandbox 936fdd45fa42551002eaa489e91e716bf70396a84aae4b11ccdbbbc4f26fcf15 with infra container: kube-system/kindnet-h2fvd/POD" id=27482f18-4045-4832-acdd-8778c22c2c72 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.881072379Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-m892s/POD" id=8324c51d-515b-4eed-8818-37c89dd92460 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.881127574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.891948974Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8324c51d-515b-4eed-8818-37c89dd92460 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.895224287Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=69e4d342-98a9-4fbf-bc16-49457429c6b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.901552164Z" level=info msg="Ran pod sandbox 5ecfdd0e1ea196ce91305ae33d49444f60ece2c54eafd8974747ff40c85ee182 with infra container: kube-system/kube-proxy-m892s/POD" id=8324c51d-515b-4eed-8818-37c89dd92460 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.90702336Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b2d65baf-5c35-4439-8da5-b9814d630c71 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.91018542Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=06ed1462-22dc-41f9-bd27-3f1254428d71 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.918488638Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b3a5427e-dbf4-4501-8f3e-a9c0d98e9e21 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.920102393Z" level=info msg="Creating container: kube-system/kindnet-h2fvd/kindnet-cni" id=5a72a701-0269-44e2-b792-6e25b9a9e961 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.92037607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.925351381Z" level=info msg="Creating container: kube-system/kube-proxy-m892s/kube-proxy" id=8eaa8a20-f1d0-4c6d-be09-a3082e6a7561 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.939121931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.939690421Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.942597043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.955260281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.955933934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.967520244Z" level=info msg="Created container 3f00a2ab1edd5f2f3c29b747c8708182ead34900e7a38e9872c740fea51c87cc: kube-system/kindnet-h2fvd/kindnet-cni" id=5a72a701-0269-44e2-b792-6e25b9a9e961 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.96967872Z" level=info msg="Starting container: 3f00a2ab1edd5f2f3c29b747c8708182ead34900e7a38e9872c740fea51c87cc" id=af04d4ec-ebb5-44ba-bf35-647ced5b1ac8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:28 newest-cni-007061 crio[839]: time="2025-10-02T22:20:28.978416474Z" level=info msg="Started container" PID=1414 containerID=3f00a2ab1edd5f2f3c29b747c8708182ead34900e7a38e9872c740fea51c87cc description=kube-system/kindnet-h2fvd/kindnet-cni id=af04d4ec-ebb5-44ba-bf35-647ced5b1ac8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=936fdd45fa42551002eaa489e91e716bf70396a84aae4b11ccdbbbc4f26fcf15
	Oct 02 22:20:29 newest-cni-007061 crio[839]: time="2025-10-02T22:20:29.002191913Z" level=info msg="Created container 3fb44f48aa999c1274a437f5972a94e30fdff06e88c96b8e9d840f8b88708356: kube-system/kube-proxy-m892s/kube-proxy" id=8eaa8a20-f1d0-4c6d-be09-a3082e6a7561 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:29 newest-cni-007061 crio[839]: time="2025-10-02T22:20:29.003849245Z" level=info msg="Starting container: 3fb44f48aa999c1274a437f5972a94e30fdff06e88c96b8e9d840f8b88708356" id=eb263c1d-d726-4e1a-9153-0827f44b8b5f name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:29 newest-cni-007061 crio[839]: time="2025-10-02T22:20:29.013651094Z" level=info msg="Started container" PID=1419 containerID=3fb44f48aa999c1274a437f5972a94e30fdff06e88c96b8e9d840f8b88708356 description=kube-system/kube-proxy-m892s/kube-proxy id=eb263c1d-d726-4e1a-9153-0827f44b8b5f name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ecfdd0e1ea196ce91305ae33d49444f60ece2c54eafd8974747ff40c85ee182
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3fb44f48aa999       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   5ecfdd0e1ea19       kube-proxy-m892s                            kube-system
	3f00a2ab1edd5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   936fdd45fa425       kindnet-h2fvd                               kube-system
	a8086bc9dfbfa       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            0                   fe94997e19d53       kube-apiserver-newest-cni-007061            kube-system
	da40870e1b151       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            0                   823b8515bfc06       kube-scheduler-newest-cni-007061            kube-system
	6c4cefc5b3cee       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      0                   8b4e134d36a68       etcd-newest-cni-007061                      kube-system
	a45c171b4b315       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   0                   5726c386e2a6b       kube-controller-manager-newest-cni-007061   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-007061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-007061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=newest-cni-007061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_20_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-007061
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:20:24 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:20:24 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:20:24 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 22:20:24 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-007061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3057ca255504163899b8fb0d866765e
	  System UUID:                dd20f051-28eb-4702-9fc4-2f1d38d2bf49
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-007061                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-h2fvd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-007061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-007061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-m892s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-007061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x8 over 20s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-007061 event: Registered Node newest-cni-007061 in Controller
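	
	# Reading the taint and conditions above: the node keeps the
	# node.kubernetes.io/not-ready:NoSchedule taint until a CNI config lands in
	# /etc/cni/net.d/, which is why coredns and storage-provisioner were listed
	# as Pending earlier while the DaemonSet and static pods (which tolerate
	# the taint) run. A sketch to confirm, assuming this run's kube context:
	kubectl --context newest-cni-007061 get node newest-cni-007061 -o jsonpath='{.spec.taints}'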
	
	
	==> dmesg <==
	[ +16.398632] overlayfs: idmapped layers are currently not supported
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c4cefc5b3ceea29e2d7e870178d0ac4a5220d7098f33b804e37e07337408020] <==
	{"level":"warn","ts":"2025-10-02T22:20:17.623818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.648589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.685508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.700165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.733832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.769772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.794586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.863110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.922147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:17.957923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.022924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.047566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.081057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.097489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.172909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.174946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.194468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.234792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.254247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.293094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.327133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.361406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.385825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.419504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:18.573730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49700","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:32 up  7:02,  0 user,  load average: 5.62, 3.91, 2.74
	Linux newest-cni-007061 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f00a2ab1edd5f2f3c29b747c8708182ead34900e7a38e9872c740fea51c87cc] <==
	I1002 22:20:29.117439       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:20:29.117698       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:20:29.117825       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:20:29.117838       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:20:29.117849       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:20:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:20:29.306155       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:20:29.306172       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:20:29.306181       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:20:29.306941       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
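	
	# The "nri plugin exited" line above only means the runtime exposes no NRI
	# socket on this node; kindnet keeps running without it (see the container
	# status section above). A sketch to check for the socket, assuming shell
	# access to the node:
	test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "no NRI socket"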
	
	
	==> kube-apiserver [a8086bc9dfbfa5779a360f120b1983b145f7c6fef1dbbc0363db4132de36bf08] <==
	I1002 22:20:19.998295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:20:19.998329       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:20:20.001270       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:20:20.005800       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:20:20.064361       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1002 22:20:20.066618       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:20.086359       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:20.087858       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:20:20.508175       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 22:20:20.519165       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 22:20:20.519192       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:20:21.892673       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:20:21.979479       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:20:22.086859       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 22:20:22.097856       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1002 22:20:22.100332       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:20:22.108771       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:20:22.645497       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:20:23.353164       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:20:23.404639       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 22:20:23.441341       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 22:20:28.521498       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1002 22:20:28.677825       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:28.683710       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:28.721996       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a45c171b4b31538395ff1207b145d2eb1d755396f1a0bc98ead428467a1efa46] <==
	I1002 22:20:27.642605       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:27.644693       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:20:27.651913       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:20:27.651998       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 22:20:27.651922       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 22:20:27.652118       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 22:20:27.652108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 22:20:27.652275       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 22:20:27.652348       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 22:20:27.652387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 22:20:27.652392       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 22:20:27.652397       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 22:20:27.654492       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:20:27.660941       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-007061" podCIDRs=["10.42.0.0/24"]
	I1002 22:20:27.665681       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:27.665704       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:20:27.665712       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:20:27.666121       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:20:27.667293       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:20:27.667316       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:20:27.667778       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 22:20:27.668425       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1002 22:20:27.675679       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 22:20:27.678970       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:20:27.681215       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [3fb44f48aa999c1274a437f5972a94e30fdff06e88c96b8e9d840f8b88708356] <==
	I1002 22:20:29.447623       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:20:29.555740       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:20:29.656660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:20:29.656700       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:20:29.656783       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:20:29.716922       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:20:29.716973       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:20:29.741981       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:20:29.742305       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:20:29.742331       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:29.743416       1 config.go:200] "Starting service config controller"
	I1002 22:20:29.743436       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:20:29.744629       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:20:29.744641       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:20:29.744660       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:20:29.744664       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:20:29.744919       1 config.go:309] "Starting node config controller"
	I1002 22:20:29.744928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:20:29.845575       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:20:29.845613       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 22:20:29.845683       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:20:29.845695       1 shared_informer.go:356] "Caches are synced" controller="service config"
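	
	# The E-level "Kube-proxy configuration may be incomplete" message above is
	# advisory: with nodePortAddresses unset, NodePort services accept traffic
	# on every local IP. The remedy the server itself suggests is a kube-proxy
	# flag (shown as a sketch; this test run does not set it):
	#   --nodeport-addresses primary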
	
	
	==> kube-scheduler [da40870e1b151b0ae83333fb19388c730d593c6f658560ffd57ef4f76a65aea1] <==
	I1002 22:20:21.428690       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:21.442521       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:20:21.443307       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:21.443677       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:21.443755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 22:20:21.468812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 22:20:21.470618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 22:20:21.471575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 22:20:21.472668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 22:20:21.479016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 22:20:21.473183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 22:20:21.479384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 22:20:21.479723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 22:20:21.480080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 22:20:21.480273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 22:20:21.480315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 22:20:21.480361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 22:20:21.480410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 22:20:21.480441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 22:20:21.480697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 22:20:21.481022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 22:20:21.484872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 22:20:21.486182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 22:20:21.491881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1002 22:20:22.644409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
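	
	# The "Failed to watch ... is forbidden" errors above are startup-ordering
	# noise: the scheduler starts its informers before its RBAC bindings are
	# visible, and the closing "Caches are synced" line shows it recovered. A
	# sketch to verify the permissions afterwards, assuming this run's context:
	kubectl --context newest-cni-007061 auth can-i list pods --as=system:kube-scheduler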
	
	
	==> kubelet <==
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.103993    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dd43616f3e32e1609e53b83ae785574-ca-certs\") pod \"kube-controller-manager-newest-cni-007061\" (UID: \"6dd43616f3e32e1609e53b83ae785574\") " pod="kube-system/kube-controller-manager-newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.104012    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a930c47817413016e1ac0d2c6cdcdd7e-kubeconfig\") pod \"kube-scheduler-newest-cni-007061\" (UID: \"a930c47817413016e1ac0d2c6cdcdd7e\") " pod="kube-system/kube-scheduler-newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.104039    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/500ecf714ff1476de5d8a5b73450c34b-ca-certs\") pod \"kube-apiserver-newest-cni-007061\" (UID: \"500ecf714ff1476de5d8a5b73450c34b\") " pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.104074    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/500ecf714ff1476de5d8a5b73450c34b-k8s-certs\") pod \"kube-apiserver-newest-cni-007061\" (UID: \"500ecf714ff1476de5d8a5b73450c34b\") " pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.104099    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/500ecf714ff1476de5d8a5b73450c34b-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-007061\" (UID: \"500ecf714ff1476de5d8a5b73450c34b\") " pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.120280    1302 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.120385    1302 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-007061"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.491013    1302 apiserver.go:52] "Watching apiserver"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.587713    1302 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.667120    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-007061" podStartSLOduration=1.66709782 podStartE2EDuration="1.66709782s" podCreationTimestamp="2025-10-02 22:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:20:24.635879078 +0000 UTC m=+1.402058260" watchObservedRunningTime="2025-10-02 22:20:24.66709782 +0000 UTC m=+1.433277002"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.721773    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-007061" podStartSLOduration=0.721751471 podStartE2EDuration="721.751471ms" podCreationTimestamp="2025-10-02 22:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:20:24.696347868 +0000 UTC m=+1.462527050" watchObservedRunningTime="2025-10-02 22:20:24.721751471 +0000 UTC m=+1.487930661"
	Oct 02 22:20:24 newest-cni-007061 kubelet[1302]: I1002 22:20:24.746097    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-007061" podStartSLOduration=0.746074764 podStartE2EDuration="746.074764ms" podCreationTimestamp="2025-10-02 22:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:20:24.722220278 +0000 UTC m=+1.488399485" watchObservedRunningTime="2025-10-02 22:20:24.746074764 +0000 UTC m=+1.512254069"
	Oct 02 22:20:27 newest-cni-007061 kubelet[1302]: I1002 22:20:27.729602    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 22:20:27 newest-cni-007061 kubelet[1302]: I1002 22:20:27.738150    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662545    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-xtables-lock\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662602    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-lib-modules\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662640    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-xtables-lock\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662670    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-256sh\" (UniqueName: \"kubernetes.io/projected/1995a215-d278-4c89-b447-26a58362aab5-kube-api-access-256sh\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662692    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-cni-cfg\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662714    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhwn\" (UniqueName: \"kubernetes.io/projected/b797cb62-e31f-4e5b-825b-81189902db5f-kube-api-access-bmhwn\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662743    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1995a215-d278-4c89-b447-26a58362aab5-kube-proxy\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.662766    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-lib-modules\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:28 newest-cni-007061 kubelet[1302]: I1002 22:20:28.773856    1302 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:20:29 newest-cni-007061 kubelet[1302]: I1002 22:20:29.196342    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-h2fvd" podStartSLOduration=1.196324379 podStartE2EDuration="1.196324379s" podCreationTimestamp="2025-10-02 22:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:20:29.052225847 +0000 UTC m=+5.818405037" watchObservedRunningTime="2025-10-02 22:20:29.196324379 +0000 UTC m=+5.962503569"
	Oct 02 22:20:30 newest-cni-007061 kubelet[1302]: I1002 22:20:30.092775    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m892s" podStartSLOduration=2.092753239 podStartE2EDuration="2.092753239s" podCreationTimestamp="2025-10-02 22:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 22:20:30.02314772 +0000 UTC m=+6.789326926" watchObservedRunningTime="2025-10-02 22:20:30.092753239 +0000 UTC m=+6.858932421"
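	
	# The swap_util error above is a benign capability probe: kubelet stats a
	# not-yet-created plugin directory to test tmpfs noswap support and, on
	# failure, assumes the feature is unsupported and continues. A sketch of
	# the same check, assuming node access:
	stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir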
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-007061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h7pp8 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner: exit status 1 (85.246516ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h7pp8" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.43s)
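
A hedged reading of the post-mortem above, not a confirmed root cause: the two pods flagged as non-running at helpers_test.go:280 no longer existed by the time the describe ran (the coredns deployment, for instance, had just been rescaled to 1 replica per the start log), so the follow-up describe returns NotFound. Re-running the same field-selector query immediately before describing would show whether the race persists:

	kubectl --context newest-cni-007061 get po -A --field-selector=status.phase!=Running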

TestStartStop/group/newest-cni/serial/Pause (6.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-007061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-007061 --alsologtostderr -v=1: exit status 80 (1.910026899s)

-- stdout --
	* Pausing node newest-cni-007061 ... 
	
	

-- /stdout --
** stderr ** 
	I1002 22:20:51.297695 1483843 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:51.297824 1483843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:51.297830 1483843 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:51.297836 1483843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:51.298218 1483843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:51.298521 1483843 out.go:368] Setting JSON to false
	I1002 22:20:51.298540 1483843 mustload.go:65] Loading cluster: newest-cni-007061
	I1002 22:20:51.299239 1483843 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:51.300053 1483843 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:51.322024 1483843 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:51.322406 1483843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:51.399904 1483843 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 22:20:51.388973918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:51.400686 1483843 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-007061 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:20:51.406535 1483843 out.go:179] * Pausing node newest-cni-007061 ... 
	I1002 22:20:51.409609 1483843 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:51.409966 1483843 ssh_runner.go:195] Run: systemctl --version
	I1002 22:20:51.410116 1483843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:51.430123 1483843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:51.533123 1483843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:20:51.549022 1483843 pause.go:51] kubelet running: true
	I1002 22:20:51.549096 1483843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:20:51.813724 1483843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:20:51.813872 1483843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:20:51.893590 1483843 cri.go:89] found id: "fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71"
	I1002 22:20:51.893617 1483843 cri.go:89] found id: "4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be"
	I1002 22:20:51.893622 1483843 cri.go:89] found id: "0f7ed4457e4dd510d818c61d551390c715d5526db66159ff7ebad267e8eeae6c"
	I1002 22:20:51.893625 1483843 cri.go:89] found id: "2a391833bd371b962840f5da1a5dd64f92ab4d26ed844dd8d9839166a6995da0"
	I1002 22:20:51.893629 1483843 cri.go:89] found id: "1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4"
	I1002 22:20:51.893633 1483843 cri.go:89] found id: "b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98"
	I1002 22:20:51.893636 1483843 cri.go:89] found id: ""
	I1002 22:20:51.893686 1483843 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:20:51.907886 1483843 retry.go:31] will retry after 194.017519ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:51Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:20:52.102181 1483843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:20:52.119572 1483843 pause.go:51] kubelet running: false
	I1002 22:20:52.119725 1483843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:20:52.285089 1483843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:20:52.285247 1483843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:20:52.378498 1483843 cri.go:89] found id: "fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71"
	I1002 22:20:52.378517 1483843 cri.go:89] found id: "4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be"
	I1002 22:20:52.378522 1483843 cri.go:89] found id: "0f7ed4457e4dd510d818c61d551390c715d5526db66159ff7ebad267e8eeae6c"
	I1002 22:20:52.378526 1483843 cri.go:89] found id: "2a391833bd371b962840f5da1a5dd64f92ab4d26ed844dd8d9839166a6995da0"
	I1002 22:20:52.378530 1483843 cri.go:89] found id: "1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4"
	I1002 22:20:52.378534 1483843 cri.go:89] found id: "b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98"
	I1002 22:20:52.378537 1483843 cri.go:89] found id: ""
	I1002 22:20:52.378609 1483843 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:20:52.390007 1483843 retry.go:31] will retry after 504.472907ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:52Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:20:52.894796 1483843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:20:52.907892 1483843 pause.go:51] kubelet running: false
	I1002 22:20:52.908017 1483843 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:20:53.049003 1483843 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:20:53.049121 1483843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:20:53.124856 1483843 cri.go:89] found id: "fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71"
	I1002 22:20:53.124879 1483843 cri.go:89] found id: "4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be"
	I1002 22:20:53.124883 1483843 cri.go:89] found id: "0f7ed4457e4dd510d818c61d551390c715d5526db66159ff7ebad267e8eeae6c"
	I1002 22:20:53.124887 1483843 cri.go:89] found id: "2a391833bd371b962840f5da1a5dd64f92ab4d26ed844dd8d9839166a6995da0"
	I1002 22:20:53.124890 1483843 cri.go:89] found id: "1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4"
	I1002 22:20:53.124893 1483843 cri.go:89] found id: "b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98"
	I1002 22:20:53.124896 1483843 cri.go:89] found id: ""
	I1002 22:20:53.124947 1483843 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:20:53.139616 1483843 out.go:203] 
	W1002 22:20:53.142617 1483843 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:20:53.142689 1483843 out.go:285] * 
	* 
	W1002 22:20:53.152470 1483843 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:20:53.155342 1483843 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-007061 --alsologtostderr -v=1 failed: exit status 80
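Note on the failure mode: every pause attempt in the stderr above dies at the same step, `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory", and that is what surfaces as GUEST_PAUSE. The Go sketch below reproduces just that probe so it can be checked by hand on the node. The /run/runc path and the `runc list -f json` invocation are taken verbatim from the log; the program structure and the suggestion that a missing runc state directory (for example, a crun-backed CRI-O) is the trigger are assumptions, not minikube source.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// runc keeps per-container state under /run/runc; if that directory
	// is absent, `runc list` fails with exactly the error seen above.
	// (Assumption inferred from the log message, not from minikube code.)
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		fmt.Println("/run/runc missing; `sudo runc list -f json` will fail as logged")
		return
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}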
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-007061
helpers_test.go:243: (dbg) docker inspect newest-cni-007061:

-- stdout --
	[
	    {
	        "Id": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	        "Created": "2025-10-02T22:19:49.208440001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1482075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:20:35.141400385Z",
	            "FinishedAt": "2025-10-02T22:20:34.323116521Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hosts",
	        "LogPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01-json.log",
	        "Name": "/newest-cni-007061",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-007061:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-007061",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	                "LowerDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-007061",
	                "Source": "/var/lib/docker/volumes/newest-cni-007061/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-007061",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-007061",
	                "name.minikube.sigs.k8s.io": "newest-cni-007061",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "716d49888bf3aedede5d8a2d2f1c0e2ece12dd495e463487a2bba96829338302",
	            "SandboxKey": "/var/run/docker/netns/716d49888bf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-007061": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:37:33:b4:03:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe85bc902fc6ddde3be87025823d1d70984e1f5f4e60ca56b5f7626fbe228993",
	                    "EndpointID": "b4dff446422461d59395166b6c07cbf2ca90468e10c7a02d786d448534293130",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-007061",
	                        "3375b860c995"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
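The Ports map in the inspect output above is the source of the SSH endpoint used throughout this log: the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` that cli_runner.go is shown running resolves 22/tcp to host port 34596. Below is a self-contained sketch of that lookup, assuming only the docker CLI; hostPortFor is an illustrative name, not a minikube helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor returns the host port Docker mapped to a container port,
// using the same inspect template the log shows cli_runner.go running.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("newest-cni-007061", "22/tcp")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("22/tcp ->", p) // per the inspect output above: 34596
}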
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061: exit status 2 (345.541588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25: (1.102667917s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ stop    │ -p newest-cni-007061 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-007061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ image   │ newest-cni-007061 image list --format=json                                                                                                                                                                                                    │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p newest-cni-007061 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:20:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:20:34.851248 1481947 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:34.851400 1481947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:34.851413 1481947 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:34.851418 1481947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:34.851742 1481947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:34.852722 1481947 out.go:368] Setting JSON to false
	I1002 22:20:34.853711 1481947 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25360,"bootTime":1759418275,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:20:34.853784 1481947 start.go:140] virtualization:  
	I1002 22:20:34.859026 1481947 out.go:179] * [newest-cni-007061] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:20:34.862141 1481947 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:20:34.862187 1481947 notify.go:220] Checking for updates...
	I1002 22:20:34.866439 1481947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:20:34.869338 1481947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:34.872249 1481947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:20:34.875374 1481947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:20:34.878359 1481947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:20:34.881773 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:34.882480 1481947 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:20:34.912894 1481947 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:20:34.913065 1481947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:34.970933 1481947 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:34.961495552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:34.971047 1481947 docker.go:318] overlay module found
	I1002 22:20:34.974236 1481947 out.go:179] * Using the docker driver based on existing profile
	I1002 22:20:34.976979 1481947 start.go:304] selected driver: docker
	I1002 22:20:34.976998 1481947 start.go:924] validating driver "docker" against &{Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:34.977101 1481947 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:20:34.977848 1481947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:35.038882 1481947 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:35.027831083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:35.039287 1481947 start_flags.go:1021] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 22:20:35.039325 1481947 cni.go:84] Creating CNI manager for ""
	I1002 22:20:35.039388 1481947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:35.039463 1481947 start.go:348] cluster config:
	{Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:35.042751 1481947 out.go:179] * Starting "newest-cni-007061" primary control-plane node in "newest-cni-007061" cluster
	I1002 22:20:35.045659 1481947 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:20:35.048689 1481947 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:20:35.051684 1481947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:20:35.051764 1481947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:35.051862 1481947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:20:35.051874 1481947 cache.go:58] Caching tarball of preloaded images
	I1002 22:20:35.052169 1481947 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:20:35.052194 1481947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:20:35.052330 1481947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/config.json ...
	I1002 22:20:35.080926 1481947 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:20:35.080952 1481947 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:20:35.080971 1481947 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:20:35.081000 1481947 start.go:360] acquireMachinesLock for newest-cni-007061: {Name:mk07ea86d3b6a688131669b97ec51445de367e54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:20:35.081074 1481947 start.go:364] duration metric: took 50.214µs to acquireMachinesLock for "newest-cni-007061"
	I1002 22:20:35.081138 1481947 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:20:35.081161 1481947 fix.go:54] fixHost starting: 
	I1002 22:20:35.081473 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:35.102275 1481947 fix.go:112] recreateIfNeeded on newest-cni-007061: state=Stopped err=<nil>
	W1002 22:20:35.102330 1481947 fix.go:138] unexpected machine state, will restart: <nil>
	W1002 22:20:36.417059 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:38.417850 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:35.105612 1481947 out.go:252] * Restarting existing docker container for "newest-cni-007061" ...
	I1002 22:20:35.105722 1481947 cli_runner.go:164] Run: docker start newest-cni-007061
	I1002 22:20:35.384158 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:35.403177 1481947 kic.go:430] container "newest-cni-007061" state is running.
	I1002 22:20:35.403569 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:35.428143 1481947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/config.json ...
	I1002 22:20:35.428377 1481947 machine.go:93] provisionDockerMachine start ...
	I1002 22:20:35.428443 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:35.453359 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:35.453755 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:35.453766 1481947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:20:35.454694 1481947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:20:38.585575 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:20:38.585598 1481947 ubuntu.go:182] provisioning hostname "newest-cni-007061"
	I1002 22:20:38.585723 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:38.605337 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:38.605660 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:38.605678 1481947 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007061 && echo "newest-cni-007061" | sudo tee /etc/hostname
	I1002 22:20:38.747399 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:20:38.747472 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:38.768005 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:38.768313 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:38.768330 1481947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007061/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:20:38.902356 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:20:38.902381 1481947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:20:38.902402 1481947 ubuntu.go:190] setting up certificates
	I1002 22:20:38.902412 1481947 provision.go:84] configureAuth start
	I1002 22:20:38.902490 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:38.921776 1481947 provision.go:143] copyHostCerts
	I1002 22:20:38.921843 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:20:38.921860 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:20:38.921939 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:20:38.922116 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:20:38.922123 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:20:38.922154 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:20:38.922219 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:20:38.922224 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:20:38.922249 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:20:38.922302 1481947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007061 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-007061]
	I1002 22:20:39.639433 1481947 provision.go:177] copyRemoteCerts
	I1002 22:20:39.639502 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:20:39.639541 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:39.659683 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:39.758617 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:20:39.778424 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:20:39.797297 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:20:39.817487 1481947 provision.go:87] duration metric: took 915.060762ms to configureAuth
	I1002 22:20:39.817523 1481947 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:20:39.817723 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:39.817838 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:39.836208 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:39.836520 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:39.836538 1481947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:20:40.190968 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:20:40.190991 1481947 machine.go:96] duration metric: took 4.762604273s to provisionDockerMachine
	I1002 22:20:40.191005 1481947 start.go:293] postStartSetup for "newest-cni-007061" (driver="docker")
	I1002 22:20:40.191017 1481947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:20:40.191085 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:20:40.191126 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.211445 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.314326 1481947 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:20:40.318200 1481947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:20:40.318230 1481947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:20:40.318242 1481947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:20:40.318298 1481947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:20:40.318383 1481947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:20:40.318493 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:20:40.327150 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:20:40.346929 1481947 start.go:296] duration metric: took 155.90777ms for postStartSetup
	I1002 22:20:40.347050 1481947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:20:40.347102 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.364809 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.460249 1481947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:20:40.465488 1481947 fix.go:56] duration metric: took 5.384329155s for fixHost
	I1002 22:20:40.465510 1481947 start.go:83] releasing machines lock for "newest-cni-007061", held for 5.384424194s
	I1002 22:20:40.465593 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:40.484111 1481947 ssh_runner.go:195] Run: cat /version.json
	I1002 22:20:40.484131 1481947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:20:40.484181 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.484214 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.503351 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.505849 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.598508 1481947 ssh_runner.go:195] Run: systemctl --version
	I1002 22:20:40.706888 1481947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:20:40.753211 1481947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:20:40.758194 1481947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:20:40.758306 1481947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:20:40.766650 1481947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:20:40.766671 1481947 start.go:495] detecting cgroup driver to use...
	I1002 22:20:40.766731 1481947 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:20:40.766806 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:20:40.784434 1481947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:20:40.798118 1481947 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:20:40.798229 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:20:40.814278 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:20:40.828381 1481947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:20:40.937862 1481947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:20:41.061408 1481947 docker.go:234] disabling docker service ...
	I1002 22:20:41.061481 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:20:41.078961 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:20:41.092429 1481947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:20:41.213355 1481947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:20:41.335915 1481947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:20:41.349246 1481947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:20:41.365038 1481947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:20:41.365104 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.375168 1481947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:20:41.375247 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.384430 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.393848 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.403368 1481947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:20:41.411961 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.423155 1481947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.433515 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.444335 1481947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:20:41.452697 1481947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:20:41.460853 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:41.582183 1481947 ssh_runner.go:195] Run: sudo systemctl restart crio
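
	Each sed call in the sequence above rewrites one `key = value` line of /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of that pattern for the two simplest keys (a sketch, not the code behind crio.go):

	package main

	import (
		"os"
		"regexp"
	)

	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	// setKey is the equivalent of: sed -i 's|^.*key = .*$|key = "value"|'
	func setKey(data []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
		return re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	}

	func main() {
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		data = setKey(data, "pause_image", "registry.k8s.io/pause:3.10.1")
		data = setKey(data, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(conf, data, 0644); err != nil {
			panic(err)
		}
		// As in the log, a `systemctl daemon-reload` followed by
		// `systemctl restart crio` is still needed for the values to apply.
	}
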
	I1002 22:20:41.717643 1481947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:20:41.717785 1481947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:20:41.722105 1481947 start.go:563] Will wait 60s for crictl version
	I1002 22:20:41.722231 1481947 ssh_runner.go:195] Run: which crictl
	I1002 22:20:41.726009 1481947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:20:41.756726 1481947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:20:41.756869 1481947 ssh_runner.go:195] Run: crio --version
	I1002 22:20:41.785939 1481947 ssh_runner.go:195] Run: crio --version
	I1002 22:20:41.819861 1481947 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:20:41.822769 1481947 cli_runner.go:164] Run: docker network inspect newest-cni-007061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:20:41.839718 1481947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:20:41.843654 1481947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:20:41.857014 1481947 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 22:20:41.859859 1481947 kubeadm.go:883] updating cluster {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:20:41.859994 1481947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:41.860085 1481947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:20:41.895864 1481947 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:20:41.895888 1481947 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:20:41.895944 1481947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:20:41.932612 1481947 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:20:41.932635 1481947 cache_images.go:85] Images are preloaded, skipping loading
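
	The preload decision is made by listing images through crictl and checking that the required tags are present. A sketch of that check, assuming crictl's JSON output exposes an `images` array with `repoTags` fields (observed behavior, not a documented contract):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Same command the log runs: sudo crictl images --output json
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// One image the v1.34.1/crio preload must contain, per the config above:
		fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.10.1"])
	}
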
	I1002 22:20:41.932643 1481947 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:20:41.932740 1481947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-007061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 22:20:41.932826 1481947 ssh_runner.go:195] Run: crio config
	I1002 22:20:42.005083 1481947 cni.go:84] Creating CNI manager for ""
	I1002 22:20:42.005189 1481947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:42.005233 1481947 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 22:20:42.005280 1481947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-007061 NodeName:newest-cni-007061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:20:42.005477 1481947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-007061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 22:20:42.005596 1481947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:20:42.018926 1481947 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:20:42.019030 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:20:42.029450 1481947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:20:42.044057 1481947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:20:42.058959 1481947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
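
	The 2212 bytes written to /var/tmp/minikube/kubeadm.yaml.new are the four-document YAML stream rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which kubeadm consumes as a whole. A naive stdlib-only check that all four kinds made it into the file, assuming documents are separated by a bare `---` line:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		// Split the multi-document stream and report each document's kind.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("doc %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}
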
	I1002 22:20:42.076553 1481947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:20:42.081624 1481947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:20:42.095649 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:42.264700 1481947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:20:42.288172 1481947 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061 for IP: 192.168.85.2
	I1002 22:20:42.288214 1481947 certs.go:195] generating shared ca certs ...
	I1002 22:20:42.288233 1481947 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:42.288433 1481947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:20:42.288508 1481947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:20:42.288521 1481947 certs.go:257] generating profile certs ...
	I1002 22:20:42.288624 1481947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.key
	I1002 22:20:42.288702 1481947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433
	I1002 22:20:42.288750 1481947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key
	I1002 22:20:42.288867 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:20:42.288904 1481947 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:20:42.288917 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:20:42.288943 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:20:42.288970 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:20:42.288995 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:20:42.289053 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:20:42.289796 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:20:42.317997 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:20:42.342562 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:20:42.363430 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:20:42.383595 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:20:42.402786 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:20:42.427610 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:20:42.450469 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:20:42.475673 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:20:42.497320 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:20:42.522485 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:20:42.543395 1481947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:20:42.557235 1481947 ssh_runner.go:195] Run: openssl version
	I1002 22:20:42.568420 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:20:42.578192 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.582158 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.582273 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.624889 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:20:42.633717 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:20:42.645033 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.649032 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.649103 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.690113 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:20:42.698725 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:20:42.707276 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.711518 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.711591 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.752854 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
	I1002 22:20:42.760923 1481947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:20:42.764699 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:20:42.806191 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:20:42.847801 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:20:42.888953 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:20:42.930010 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:20:42.983935 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
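
	Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within the next 24 hours, which is what would trigger regeneration. The equivalent check written directly in Go, against one of the paths probed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the cert at path expires within d,
	// i.e. the condition -checkend tests for.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
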
	I1002 22:20:43.081394 1481947 kubeadm.go:400] StartCluster: {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:43.081541 1481947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:20:43.081647 1481947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:20:43.146585 1481947 cri.go:89] found id: "1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4"
	I1002 22:20:43.146650 1481947 cri.go:89] found id: "b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98"
	I1002 22:20:43.146667 1481947 cri.go:89] found id: ""
	I1002 22:20:43.146757 1481947 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:20:43.187663 1481947 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:43Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:20:43.187830 1481947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:20:43.201550 1481947 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:20:43.201611 1481947 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:20:43.201703 1481947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:20:43.211977 1481947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:20:43.212623 1481947 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-007061" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:43.212936 1481947 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-007061" cluster setting kubeconfig missing "newest-cni-007061" context setting]
	I1002 22:20:43.213415 1481947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.215073 1481947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:20:43.232303 1481947 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:20:43.232378 1481947 kubeadm.go:601] duration metric: took 30.740482ms to restartPrimaryControlPlane
	I1002 22:20:43.232401 1481947 kubeadm.go:402] duration metric: took 151.01774ms to StartCluster
	I1002 22:20:43.232440 1481947 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.232523 1481947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:43.233478 1481947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.233741 1481947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:20:43.234157 1481947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:20:43.234232 1481947 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-007061"
	I1002 22:20:43.234246 1481947 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-007061"
	W1002 22:20:43.234252 1481947 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:20:43.234272 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.234731 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.234894 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:43.234978 1481947 addons.go:69] Setting dashboard=true in profile "newest-cni-007061"
	I1002 22:20:43.235009 1481947 addons.go:238] Setting addon dashboard=true in "newest-cni-007061"
	W1002 22:20:43.235032 1481947 addons.go:247] addon dashboard should already be in state true
	I1002 22:20:43.235079 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.235510 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.237081 1481947 addons.go:69] Setting default-storageclass=true in profile "newest-cni-007061"
	I1002 22:20:43.237116 1481947 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-007061"
	I1002 22:20:43.237887 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.242087 1481947 out.go:179] * Verifying Kubernetes components...
	I1002 22:20:43.252336 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:43.286077 1481947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:20:43.292111 1481947 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:20:43.292221 1481947 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:43.292231 1481947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:20:43.292317 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.303253 1481947 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1002 22:20:40.918171 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:42.918288 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:43.303852 1481947 addons.go:238] Setting addon default-storageclass=true in "newest-cni-007061"
	W1002 22:20:43.303870 1481947 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:20:43.303922 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.304367 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.306443 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:20:43.306472 1481947 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:20:43.306542 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.349999 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.353339 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.362476 1481947 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:43.362504 1481947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:20:43.362568 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.391702 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.612370 1481947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:20:43.618406 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:43.648661 1481947 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:20:43.648810 1481947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:20:43.657952 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:43.682411 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:20:43.682486 1481947 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:20:43.772078 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:20:43.772150 1481947 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:20:43.843086 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:20:43.843164 1481947 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:20:43.875147 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:20:43.875220 1481947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:20:43.896454 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:20:43.896532 1481947 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:20:43.913104 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:20:43.913179 1481947 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:20:43.939077 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:20:43.939154 1481947 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:20:43.961404 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:20:43.961480 1481947 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:20:43.987274 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:20:43.987347 1481947 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:20:44.013862 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 22:20:44.920721 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:46.420269 1477309 pod_ready.go:94] pod "coredns-66bc5c9577-rj4bn" is "Ready"
	I1002 22:20:46.420302 1477309 pod_ready.go:86] duration metric: took 36.508388071s for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.425264 1477309 pod_ready.go:83] waiting for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.435794 1477309 pod_ready.go:94] pod "etcd-no-preload-975002" is "Ready"
	I1002 22:20:46.435836 1477309 pod_ready.go:86] duration metric: took 10.463097ms for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.439664 1477309 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.451808 1477309 pod_ready.go:94] pod "kube-apiserver-no-preload-975002" is "Ready"
	I1002 22:20:46.451833 1477309 pod_ready.go:86] duration metric: took 12.140679ms for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.459644 1477309 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.616674 1477309 pod_ready.go:94] pod "kube-controller-manager-no-preload-975002" is "Ready"
	I1002 22:20:46.616774 1477309 pod_ready.go:86] duration metric: took 157.04456ms for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.817107 1477309 pod_ready.go:83] waiting for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.215834 1477309 pod_ready.go:94] pod "kube-proxy-lzzt4" is "Ready"
	I1002 22:20:47.215859 1477309 pod_ready.go:86] duration metric: took 398.6635ms for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.416237 1477309 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.816336 1477309 pod_ready.go:94] pod "kube-scheduler-no-preload-975002" is "Ready"
	I1002 22:20:47.816369 1477309 pod_ready.go:86] duration metric: took 400.096615ms for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.816389 1477309 pod_ready.go:40] duration metric: took 37.908835354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:20:47.922261 1477309 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:20:47.925640 1477309 out.go:179] * Done! kubectl is now configured to use "no-preload-975002" cluster and "default" namespace by default
	I1002 22:20:48.952016 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.333529152s)
	I1002 22:20:48.952379 1481947 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.303511266s)
	I1002 22:20:48.952403 1481947 api_server.go:72] duration metric: took 5.718614601s to wait for apiserver process to appear ...
	I1002 22:20:48.952409 1481947 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:20:48.952422 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:48.990503 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:48.990533 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:49.453183 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:49.461852 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:49.461891 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:49.709398 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.051363098s)
	I1002 22:20:49.709508 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.695569927s)
	I1002 22:20:49.712610 1481947 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-007061 addons enable metrics-server
	
	I1002 22:20:49.715692 1481947 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 22:20:49.718582 1481947 addons.go:514] duration metric: took 6.484405302s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 22:20:49.953280 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:49.970201 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:49.970236 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:50.452801 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:50.461990 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:20:50.463247 1481947 api_server.go:141] control plane version: v1.34.1
	I1002 22:20:50.463277 1481947 api_server.go:131] duration metric: took 1.510860282s to wait for apiserver health ...
	I1002 22:20:50.463287 1481947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:20:50.467241 1481947 system_pods.go:59] 8 kube-system pods found
	I1002 22:20:50.467284 1481947 system_pods.go:61] "coredns-66bc5c9577-h7pp8" [c67b11a7-df6e-47e6-ad4f-d31506dd89b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:50.467295 1481947 system_pods.go:61] "etcd-newest-cni-007061" [d0ab93e4-ccf4-4b37-9203-848cd4c28976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:20:50.467317 1481947 system_pods.go:61] "kindnet-h2fvd" [b797cb62-e31f-4e5b-825b-81189902db5f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 22:20:50.467330 1481947 system_pods.go:61] "kube-apiserver-newest-cni-007061" [6aec1d67-1e4b-4386-b8a8-4ff00284349f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:20:50.467337 1481947 system_pods.go:61] "kube-controller-manager-newest-cni-007061" [d00bdb1f-2170-4f7e-815a-5deb837a0264] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:20:50.467352 1481947 system_pods.go:61] "kube-proxy-m892s" [1995a215-d278-4c89-b447-26a58362aab5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 22:20:50.467360 1481947 system_pods.go:61] "kube-scheduler-newest-cni-007061" [c7a188d8-ffeb-4928-91de-f1bcd7a14aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:20:50.467373 1481947 system_pods.go:61] "storage-provisioner" [063bf5bf-9b28-43fe-9f9e-f76c3dc4bd44] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:50.467389 1481947 system_pods.go:74] duration metric: took 4.089773ms to wait for pod list to return data ...
	I1002 22:20:50.467406 1481947 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:20:50.471487 1481947 default_sa.go:45] found service account: "default"
	I1002 22:20:50.471514 1481947 default_sa.go:55] duration metric: took 4.101932ms for default service account to be created ...
	I1002 22:20:50.471527 1481947 kubeadm.go:586] duration metric: took 7.23773756s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 22:20:50.471545 1481947 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:20:50.474402 1481947 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:20:50.474450 1481947 node_conditions.go:123] node cpu capacity is 2
	I1002 22:20:50.474462 1481947 node_conditions.go:105] duration metric: took 2.912891ms to run NodePressure ...
	I1002 22:20:50.474474 1481947 start.go:241] waiting for startup goroutines ...
	I1002 22:20:50.474483 1481947 start.go:246] waiting for cluster config update ...
	I1002 22:20:50.474495 1481947 start.go:255] writing updated cluster config ...
	I1002 22:20:50.474801 1481947 ssh_runner.go:195] Run: rm -f paused
	I1002 22:20:50.539853 1481947 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:20:50.545839 1481947 out.go:179] * Done! kubectl is now configured to use "newest-cni-007061" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.739760973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.748687146Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-m892s/POD" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.748759596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.750063441Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=208036e7-d3d1-4cde-a328-ce2b7f67b8ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.752518806Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.759539881Z" level=info msg="Ran pod sandbox b1bad2370f1aebbd229bdd7cdb63e0aef331d7fe74207f2d3860a5964ab66010 with infra container: kube-system/kindnet-h2fvd/POD" id=208036e7-d3d1-4cde-a328-ce2b7f67b8ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.761153538Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9af9d48e-f35d-4a7a-bf7f-f78b8f4e3f86 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.77252509Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3430c1c6-58ef-4111-9cf5-cc8916a05c37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.775685689Z" level=info msg="Ran pod sandbox 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d with infra container: kube-system/kube-proxy-m892s/POD" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.77605429Z" level=info msg="Creating container: kube-system/kindnet-h2fvd/kindnet-cni" id=a2f0886b-0509-4ba5-b9ec-259a9a2bf5fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.78108647Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7bbe5355-d54f-4edd-9b09-8ce0bae3fe1e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.784342714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.790206543Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2d10a1d8-6297-4180-9c93-850a4441461b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.791658496Z" level=info msg="Creating container: kube-system/kube-proxy-m892s/kube-proxy" id=e484d3b1-eae0-4e8e-a9ff-0ccc1b190660 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.792131611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.796448727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.797109737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.80338872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.804083329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.829631716Z" level=info msg="Created container 4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be: kube-system/kindnet-h2fvd/kindnet-cni" id=a2f0886b-0509-4ba5-b9ec-259a9a2bf5fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.830382431Z" level=info msg="Starting container: 4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be" id=c74875da-01bf-4dc1-91d2-0e53c65d2388 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.8324034Z" level=info msg="Started container" PID=1059 containerID=4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be description=kube-system/kindnet-h2fvd/kindnet-cni id=c74875da-01bf-4dc1-91d2-0e53c65d2388 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1bad2370f1aebbd229bdd7cdb63e0aef331d7fe74207f2d3860a5964ab66010
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.84579798Z" level=info msg="Created container fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71: kube-system/kube-proxy-m892s/kube-proxy" id=e484d3b1-eae0-4e8e-a9ff-0ccc1b190660 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.846925976Z" level=info msg="Starting container: fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71" id=b3bf8440-8d51-47ba-ab3c-dd75d14ea95e name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.851038279Z" level=info msg="Started container" PID=1063 containerID=fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71 description=kube-system/kube-proxy-m892s/kube-proxy id=b3bf8440-8d51-47ba-ab3c-dd75d14ea95e name=/runtime.v1.RuntimeService/StartContainer sandboxID=76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd93b88e0fbf0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   76ec07d9b07ee       kube-proxy-m892s                            kube-system
	4dc95cb82fa9c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   b1bad2370f1ae       kindnet-h2fvd                               kube-system
	0f7ed4457e4dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   c44eed6c6ce85       kube-scheduler-newest-cni-007061            kube-system
	2a391833bd371       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   5ebf973142ff8       etcd-newest-cni-007061                      kube-system
	1647dd59c85ec       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   a13911fb27326       kube-controller-manager-newest-cni-007061   kube-system
	b1257b68a4678       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   d263437eac2ed       kube-apiserver-newest-cni-007061            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-007061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-007061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=newest-cni-007061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_20_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-007061
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-007061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 aca5a2dd38314baeac04f0600b7f0a8f
	  System UUID:                dd20f051-28eb-4702-9fc4-2f1d38d2bf49
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-007061                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-h2fvd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-007061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-007061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-m892s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-007061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  30s                kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s                kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-007061 event: Registered Node newest-cni-007061 in Controller
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-007061 event: Registered Node newest-cni-007061 in Controller
	
	
	==> dmesg <==
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:20] overlayfs: idmapped layers are currently not supported
	[ +29.672765] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2a391833bd371b962840f5da1a5dd64f92ab4d26ed844dd8d9839166a6995da0] <==
	{"level":"warn","ts":"2025-10-02T22:20:46.362934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.385095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.422501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.478218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.502468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.529934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.563652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.614786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.647512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.686434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.726507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.762211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.804084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.922664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.927046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.964501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.995011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.037028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.074015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.117233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.167991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.218459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.265637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.284690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.461492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:54 up  7:02,  0 user,  load average: 4.84, 3.85, 2.74
	Linux newest-cni-007061 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be] <==
	I1002 22:20:50.009983       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:20:50.011842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:20:50.012010       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:20:50.012031       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:20:50.012047       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:20:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:20:50.210586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:20:50.210668       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:20:50.210705       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:20:50.211652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98] <==
	I1002 22:20:48.724050       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:48.733513       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:20:48.733544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:20:48.733734       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:20:48.733817       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:20:48.733857       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:20:48.734532       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:20:48.744440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:20:48.767329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:20:48.770106       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 22:20:48.773957       1 aggregator.go:171] initial CRD sync complete...
	I1002 22:20:48.773979       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 22:20:48.773986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:20:48.773994       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:20:49.297620       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:20:49.336684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:20:49.366740       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:20:49.377983       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:20:49.386818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:20:49.437421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:20:49.497731       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.172.151"}
	I1002 22:20:49.534485       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.133.207"}
	I1002 22:20:52.007538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:20:52.308124       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:20:52.567963       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4] <==
	I1002 22:20:52.007754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:20:52.012046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:20:52.015720       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:20:52.016014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:20:52.016073       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:20:52.020063       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:20:52.028094       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 22:20:52.028143       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 22:20:52.028169       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 22:20:52.028174       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 22:20:52.028179       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 22:20:52.031976       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:20:52.035586       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:20:52.043783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:52.050783       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:20:52.050826       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:20:52.050906       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:20:52.051003       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 22:20:52.051428       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:20:52.051871       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:20:52.051973       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:20:52.058242       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:20:52.060988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:52.061016       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:20:52.061023       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71] <==
	I1002 22:20:49.963454       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:20:50.062232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:20:50.163327       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:20:50.163363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:20:50.163458       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:20:50.182268       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:20:50.182340       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:20:50.185778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:20:50.187793       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:20:50.187867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:50.193309       1 config.go:200] "Starting service config controller"
	I1002 22:20:50.193422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:20:50.193492       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:20:50.193548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:20:50.193593       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:20:50.193641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:20:50.193946       1 config.go:309] "Starting node config controller"
	I1002 22:20:50.194012       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:20:50.194495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:20:50.293837       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:20:50.293835       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:20:50.293855       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0f7ed4457e4dd510d818c61d551390c715d5526db66159ff7ebad267e8eeae6c] <==
	I1002 22:20:45.453720       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:20:48.914384       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:20:48.920581       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:48.935214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:20:48.939941       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:20:48.939976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:20:48.940006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:20:48.940710       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:48.940728       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:48.942785       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:48.942797       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:49.040144       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:20:49.040833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:49.043825       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.627554     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865693     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865805     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865848     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.866147     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-007061\" already exists" pod="kube-system/etcd-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.866164     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.866957     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.918878     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-007061\" already exists" pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.918928     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.933086     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-007061\" already exists" pod="kube-system/kube-controller-manager-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.933150     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.997076     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-007061\" already exists" pod="kube-system/kube-scheduler-newest-cni-007061"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.428803     728 apiserver.go:52] "Watching apiserver"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.527489     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554874     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-xtables-lock\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554940     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-lib-modules\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554966     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-cni-cfg\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554985     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-lib-modules\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.555012     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-xtables-lock\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.587874     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: W1002 22:20:49.774143     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/crio-76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d WatchSource:0}: Error finding container 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d: Status 404 returned error can't find the container with id 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d
	Oct 02 22:20:51 newest-cni-007061 kubelet[728]: I1002 22:20:51.718819     728 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
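For anyone triaging the 500s in the log above: every failed probe reports the same single failing hook, [-]poststarthook/rbac/bootstrap-roles, and the check flips to 200 once the bootstrap RBAC roles finish reconciling (the 200 at 22:20:50.461990 above). A minimal way to reproduce the verbose probe by hand, assuming the context name used elsewhere in this run, is:

	kubectl --context newest-cni-007061 get --raw='/healthz?verbose'

Going through kubectl's raw GET reuses the client credentials from the kubeconfig, avoiding the TLS setup that a direct curl against https://192.168.85.2:8443/healthz would require.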
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-007061 -n newest-cni-007061: exit status 2 (356.585655ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
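Note on the status command above: --format takes a Go template rendered against minikube's status struct. The test queries only the APIServer field, but sibling fields can be combined in one call; a sketch, assuming the Host and Kubelet field names from the default status output:

	out/minikube-linux-arm64 status -p newest-cni-007061 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'

minikube documents the exit code as encoding component state on its bits rather than signalling a hard failure, which is why the harness records exit status 2 as "may be ok".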
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-007061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m: exit status 1 (104.163736ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h7pp8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7j6xr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tdp9m" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m: exit status 1
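The NotFound errors above are consistent with the pods having been recreated under new names between the field-selector listing and the describe call; the coredns and storage-provisioner pods were Pending behind the node.kubernetes.io/not-ready:NoSchedule taint shown in the node description earlier. A minimal check of the taint and the node's Ready condition, assuming the same context:

	kubectl --context newest-cni-007061 get node newest-cni-007061 -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'

Once the CNI comes up and the taint clears, the replacement pods schedule normally.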
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-007061
helpers_test.go:243: (dbg) docker inspect newest-cni-007061:

-- stdout --
	[
	    {
	        "Id": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	        "Created": "2025-10-02T22:19:49.208440001Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1482075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:20:35.141400385Z",
	            "FinishedAt": "2025-10-02T22:20:34.323116521Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hostname",
	        "HostsPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/hosts",
	        "LogPath": "/var/lib/docker/containers/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01-json.log",
	        "Name": "/newest-cni-007061",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-007061:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-007061",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01",
	                "LowerDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15250cb30223d6aee528555c0adbfb6ce8dec1ff43a26df49cf19b6c143b13ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-007061",
	                "Source": "/var/lib/docker/volumes/newest-cni-007061/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-007061",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-007061",
	                "name.minikube.sigs.k8s.io": "newest-cni-007061",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "716d49888bf3aedede5d8a2d2f1c0e2ece12dd495e463487a2bba96829338302",
	            "SandboxKey": "/var/run/docker/netns/716d49888bf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34596"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34597"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34600"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34598"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34599"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-007061": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:37:33:b4:03:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fe85bc902fc6ddde3be87025823d1d70984e1f5f4e60ca56b5f7626fbe228993",
	                    "EndpointID": "b4dff446422461d59395166b6c07cbf2ca90468e10c7a02d786d448534293130",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-007061",
	                        "3375b860c995"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
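The block above is the helper's verbatim "docker inspect" of the node container. While the profile still exists, the same snapshot can be reproduced by hand (an illustrative command, not part of the test run); the --format Go template trims the output to one section instead of the full JSON document:

    docker container inspect newest-cni-007061 --format '{{json .State}}'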
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061: exit status 2 (354.343643ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-007061 logs -n 25: (1.095019078s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ default-k8s-diff-port-230628 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ pause   │ -p default-k8s-diff-port-230628 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable metrics-server -p embed-certs-080134 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-230628                                                                                                                                                                                                               │ default-k8s-diff-port-230628 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ stop    │ -p newest-cni-007061 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-007061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ image   │ newest-cni-007061 image list --format=json                                                                                                                                                                                                    │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p newest-cni-007061 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:20:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:20:34.851248 1481947 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:34.851400 1481947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:34.851413 1481947 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:34.851418 1481947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:34.851742 1481947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:34.852722 1481947 out.go:368] Setting JSON to false
	I1002 22:20:34.853711 1481947 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25360,"bootTime":1759418275,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:20:34.853784 1481947 start.go:140] virtualization:  
	I1002 22:20:34.859026 1481947 out.go:179] * [newest-cni-007061] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:20:34.862141 1481947 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:20:34.862187 1481947 notify.go:220] Checking for updates...
	I1002 22:20:34.866439 1481947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:20:34.869338 1481947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:34.872249 1481947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:20:34.875374 1481947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:20:34.878359 1481947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:20:34.881773 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:34.882480 1481947 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:20:34.912894 1481947 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:20:34.913065 1481947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:34.970933 1481947 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:34.961495552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:34.971047 1481947 docker.go:318] overlay module found
	I1002 22:20:34.974236 1481947 out.go:179] * Using the docker driver based on existing profile
	I1002 22:20:34.976979 1481947 start.go:304] selected driver: docker
	I1002 22:20:34.976998 1481947 start.go:924] validating driver "docker" against &{Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:34.977101 1481947 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:20:34.977848 1481947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:35.038882 1481947 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:35.027831083 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:35.039287 1481947 start_flags.go:1021] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 22:20:35.039325 1481947 cni.go:84] Creating CNI manager for ""
	I1002 22:20:35.039388 1481947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:35.039463 1481947 start.go:348] cluster config:
	{Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:35.042751 1481947 out.go:179] * Starting "newest-cni-007061" primary control-plane node in "newest-cni-007061" cluster
	I1002 22:20:35.045659 1481947 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:20:35.048689 1481947 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:20:35.051684 1481947 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:20:35.051764 1481947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:35.051862 1481947 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:20:35.051874 1481947 cache.go:58] Caching tarball of preloaded images
	I1002 22:20:35.052169 1481947 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:20:35.052194 1481947 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:20:35.052330 1481947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/config.json ...
	I1002 22:20:35.080926 1481947 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:20:35.080952 1481947 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:20:35.080971 1481947 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:20:35.081000 1481947 start.go:360] acquireMachinesLock for newest-cni-007061: {Name:mk07ea86d3b6a688131669b97ec51445de367e54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:20:35.081074 1481947 start.go:364] duration metric: took 50.214µs to acquireMachinesLock for "newest-cni-007061"
	I1002 22:20:35.081138 1481947 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:20:35.081161 1481947 fix.go:54] fixHost starting: 
	I1002 22:20:35.081473 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:35.102275 1481947 fix.go:112] recreateIfNeeded on newest-cni-007061: state=Stopped err=<nil>
	W1002 22:20:35.102330 1481947 fix.go:138] unexpected machine state, will restart: <nil>
	W1002 22:20:36.417059 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:38.417850 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:35.105612 1481947 out.go:252] * Restarting existing docker container for "newest-cni-007061" ...
	I1002 22:20:35.105722 1481947 cli_runner.go:164] Run: docker start newest-cni-007061
	I1002 22:20:35.384158 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:35.403177 1481947 kic.go:430] container "newest-cni-007061" state is running.
	I1002 22:20:35.403569 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:35.428143 1481947 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/config.json ...
	I1002 22:20:35.428377 1481947 machine.go:93] provisionDockerMachine start ...
	I1002 22:20:35.428443 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:35.453359 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:35.453755 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:35.453766 1481947 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 22:20:35.454694 1481947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:20:38.585575 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:20:38.585598 1481947 ubuntu.go:182] provisioning hostname "newest-cni-007061"
	I1002 22:20:38.585723 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:38.605337 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:38.605660 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:38.605678 1481947 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-007061 && echo "newest-cni-007061" | sudo tee /etc/hostname
	I1002 22:20:38.747399 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-007061
	
	I1002 22:20:38.747472 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:38.768005 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:38.768313 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:38.768330 1481947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-007061' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-007061/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-007061' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:20:38.902356 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
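The guarded shell in the preceding SSH command keeps the edit idempotent: the 127.0.1.1 entry is rewritten only when it does not already name the profile. One way to confirm the result inside the node (an illustrative spot check, not from the log):

    docker exec newest-cni-007061 grep 127.0.1.1 /etc/hosts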
	I1002 22:20:38.902381 1481947 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-1270657/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-1270657/.minikube}
	I1002 22:20:38.902402 1481947 ubuntu.go:190] setting up certificates
	I1002 22:20:38.902412 1481947 provision.go:84] configureAuth start
	I1002 22:20:38.902490 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:38.921776 1481947 provision.go:143] copyHostCerts
	I1002 22:20:38.921843 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem, removing ...
	I1002 22:20:38.921860 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem
	I1002 22:20:38.921939 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.pem (1078 bytes)
	I1002 22:20:38.922116 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem, removing ...
	I1002 22:20:38.922123 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem
	I1002 22:20:38.922154 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/cert.pem (1123 bytes)
	I1002 22:20:38.922219 1481947 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem, removing ...
	I1002 22:20:38.922224 1481947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem
	I1002 22:20:38.922249 1481947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-1270657/.minikube/key.pem (1675 bytes)
	I1002 22:20:38.922302 1481947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem org=jenkins.newest-cni-007061 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-007061]
	I1002 22:20:39.639433 1481947 provision.go:177] copyRemoteCerts
	I1002 22:20:39.639502 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:20:39.639541 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:39.659683 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:39.758617 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 22:20:39.778424 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 22:20:39.797297 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:20:39.817487 1481947 provision.go:87] duration metric: took 915.060762ms to configureAuth
	I1002 22:20:39.817523 1481947 ubuntu.go:206] setting minikube options for container-runtime
	I1002 22:20:39.817723 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:39.817838 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:39.836208 1481947 main.go:141] libmachine: Using SSH client type: native
	I1002 22:20:39.836520 1481947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34596 <nil> <nil>}
	I1002 22:20:39.836538 1481947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:20:40.190968 1481947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
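The drop-in written above hands CRI-O --insecure-registry for the whole service CIDR (10.96.0.0/12), so pulls from in-cluster registries on ClusterIP addresses skip TLS verification. To confirm the file landed (illustrative only):

    docker exec newest-cni-007061 cat /etc/sysconfig/crio.minikube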
	I1002 22:20:40.190991 1481947 machine.go:96] duration metric: took 4.762604273s to provisionDockerMachine
	I1002 22:20:40.191005 1481947 start.go:293] postStartSetup for "newest-cni-007061" (driver="docker")
	I1002 22:20:40.191017 1481947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:20:40.191085 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:20:40.191126 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.211445 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.314326 1481947 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:20:40.318200 1481947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:20:40.318230 1481947 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 22:20:40.318242 1481947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/addons for local assets ...
	I1002 22:20:40.318298 1481947 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-1270657/.minikube/files for local assets ...
	I1002 22:20:40.318383 1481947 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem -> 12725142.pem in /etc/ssl/certs
	I1002 22:20:40.318493 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:20:40.327150 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:20:40.346929 1481947 start.go:296] duration metric: took 155.90777ms for postStartSetup
	I1002 22:20:40.347050 1481947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:20:40.347102 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.364809 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.460249 1481947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:20:40.465488 1481947 fix.go:56] duration metric: took 5.384329155s for fixHost
	I1002 22:20:40.465510 1481947 start.go:83] releasing machines lock for "newest-cni-007061", held for 5.384424194s
	I1002 22:20:40.465593 1481947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-007061
	I1002 22:20:40.484111 1481947 ssh_runner.go:195] Run: cat /version.json
	I1002 22:20:40.484131 1481947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:20:40.484181 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.484214 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:40.503351 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.505849 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:40.598508 1481947 ssh_runner.go:195] Run: systemctl --version
	I1002 22:20:40.706888 1481947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:20:40.753211 1481947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 22:20:40.758194 1481947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 22:20:40.758306 1481947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:20:40.766650 1481947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:20:40.766671 1481947 start.go:495] detecting cgroup driver to use...
	I1002 22:20:40.766731 1481947 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 22:20:40.766806 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:20:40.784434 1481947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:20:40.798118 1481947 docker.go:218] disabling cri-docker service (if available) ...
	I1002 22:20:40.798229 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:20:40.814278 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:20:40.828381 1481947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:20:40.937862 1481947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:20:41.061408 1481947 docker.go:234] disabling docker service ...
	I1002 22:20:41.061481 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:20:41.078961 1481947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:20:41.092429 1481947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:20:41.213355 1481947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:20:41.335915 1481947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:20:41.349246 1481947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:20:41.365038 1481947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 22:20:41.365104 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.375168 1481947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:20:41.375247 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.384430 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.393848 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.403368 1481947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:20:41.411961 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.423155 1481947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.433515 1481947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:20:41.444335 1481947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:20:41.452697 1481947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:20:41.460853 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:41.582183 1481947 ssh_runner.go:195] Run: sudo systemctl restart crio
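Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]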
	I1002 22:20:41.717643 1481947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:20:41.717785 1481947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:20:41.722105 1481947 start.go:563] Will wait 60s for crictl version
	I1002 22:20:41.722231 1481947 ssh_runner.go:195] Run: which crictl
	I1002 22:20:41.726009 1481947 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 22:20:41.756726 1481947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 22:20:41.756869 1481947 ssh_runner.go:195] Run: crio --version
	I1002 22:20:41.785939 1481947 ssh_runner.go:195] Run: crio --version
	I1002 22:20:41.819861 1481947 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 22:20:41.822769 1481947 cli_runner.go:164] Run: docker network inspect newest-cni-007061 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:20:41.839718 1481947 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1002 22:20:41.843654 1481947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:20:41.857014 1481947 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1002 22:20:41.859859 1481947 kubeadm.go:883] updating cluster {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 22:20:41.859994 1481947 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:41.860085 1481947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:20:41.895864 1481947 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:20:41.895888 1481947 crio.go:433] Images already preloaded, skipping extraction
	I1002 22:20:41.895944 1481947 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:20:41.932612 1481947 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 22:20:41.932635 1481947 cache_images.go:85] Images are preloaded, skipping loading
	I1002 22:20:41.932643 1481947 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1002 22:20:41.932740 1481947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-007061 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
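Note the paired kubelet flags in the generated unit: --cgroups-per-qos=false is only accepted when --enforce-node-allocatable is set to the empty string, which is why the two always travel together in this ExecStart. The rendered drop-in can be inspected on the node (an illustrative command, not from the log):

    docker exec newest-cni-007061 systemctl cat kubelet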
	I1002 22:20:41.932826 1481947 ssh_runner.go:195] Run: crio config
	I1002 22:20:42.005083 1481947 cni.go:84] Creating CNI manager for ""
	I1002 22:20:42.005189 1481947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:42.005233 1481947 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1002 22:20:42.005280 1481947 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-007061 NodeName:newest-cni-007061 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:20:42.005477 1481947 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-007061"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
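
The dump above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A minimal Go sketch (assuming gopkg.in/yaml.v3; the abbreviated stream below is illustrative) that splits such a stream and reports each document's kind:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Two of the four documents from the dump above, abbreviated.
	const stream = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}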
	
	I1002 22:20:42.005596 1481947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 22:20:42.018926 1481947 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:20:42.019030 1481947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:20:42.029450 1481947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 22:20:42.044057 1481947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:20:42.058959 1481947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1002 22:20:42.076553 1481947 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:20:42.081624 1481947 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
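
The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: it filters out any existing line ending in a tab plus that hostname, appends the fresh "IP<TAB>hostname" entry, and copies the scratch file back over /etc/hosts under sudo. A hypothetical Go equivalent of the same upsert (upsertHost is illustrative, not a minikube helper; run as root for /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop a stale mapping, equivalent to grep -v $'\t<host>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp" // stand-in for the /tmp/h.$$ scratch file
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}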
	I1002 22:20:42.095649 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:42.264700 1481947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:20:42.288172 1481947 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061 for IP: 192.168.85.2
	I1002 22:20:42.288214 1481947 certs.go:195] generating shared ca certs ...
	I1002 22:20:42.288233 1481947 certs.go:227] acquiring lock for ca certs: {Name:mka3dc6a23a905fa28c2d5853908c990951cca8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:42.288433 1481947 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key
	I1002 22:20:42.288508 1481947 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key
	I1002 22:20:42.288521 1481947 certs.go:257] generating profile certs ...
	I1002 22:20:42.288624 1481947 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/client.key
	I1002 22:20:42.288702 1481947 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key.e4a84433
	I1002 22:20:42.288750 1481947 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key
	I1002 22:20:42.288867 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem (1338 bytes)
	W1002 22:20:42.288904 1481947 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514_empty.pem, impossibly tiny 0 bytes
	I1002 22:20:42.288917 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:20:42.288943 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem (1078 bytes)
	I1002 22:20:42.288970 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:20:42.288995 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/key.pem (1675 bytes)
	I1002 22:20:42.289053 1481947 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem (1708 bytes)
	I1002 22:20:42.289796 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:20:42.317997 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 22:20:42.342562 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:20:42.363430 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:20:42.383595 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 22:20:42.402786 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:20:42.427610 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:20:42.450469 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/newest-cni-007061/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:20:42.475673 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/ssl/certs/12725142.pem --> /usr/share/ca-certificates/12725142.pem (1708 bytes)
	I1002 22:20:42.497320 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:20:42.522485 1481947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/1272514.pem --> /usr/share/ca-certificates/1272514.pem (1338 bytes)
	I1002 22:20:42.543395 1481947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:20:42.557235 1481947 ssh_runner.go:195] Run: openssl version
	I1002 22:20:42.568420 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12725142.pem && ln -fs /usr/share/ca-certificates/12725142.pem /etc/ssl/certs/12725142.pem"
	I1002 22:20:42.578192 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.582158 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:13 /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.582273 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12725142.pem
	I1002 22:20:42.624889 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12725142.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:20:42.633717 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:20:42.645033 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.649032 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:06 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.649103 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:20:42.690113 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:20:42.698725 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1272514.pem && ln -fs /usr/share/ca-certificates/1272514.pem /etc/ssl/certs/1272514.pem"
	I1002 22:20:42.707276 1481947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.711518 1481947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:13 /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.711591 1481947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1272514.pem
	I1002 22:20:42.752854 1481947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1272514.pem /etc/ssl/certs/51391683.0"
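
The three install steps above repeat one pattern per CA file: hash the certificate subject with "openssl x509 -hash -noout" and symlink the cert to /etc/ssl/certs/<hash>.0, the filename OpenSSL-style directory lookup expects. A sketch of that pattern (linkCA is hypothetical; it shells out to openssl exactly as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // make symlink creation idempotent, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}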
	I1002 22:20:42.760923 1481947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 22:20:42.764699 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:20:42.806191 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:20:42.847801 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:20:42.888953 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:20:42.930010 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:20:42.983935 1481947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
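
Each "-checkend 86400" run above exits non-zero only if the certificate will expire within the next 24 hours (86400 seconds). The same check in Go's crypto/x509 (a sketch; expiresWithin and the apiserver.crt path are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}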
	I1002 22:20:43.081394 1481947 kubeadm.go:400] StartCluster: {Name:newest-cni-007061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-007061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:43.081541 1481947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:20:43.081647 1481947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:20:43.146585 1481947 cri.go:89] found id: "1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4"
	I1002 22:20:43.146650 1481947 cri.go:89] found id: "b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98"
	I1002 22:20:43.146667 1481947 cri.go:89] found id: ""
	I1002 22:20:43.146757 1481947 ssh_runner.go:195] Run: sudo runc list -f json
	W1002 22:20:43.187663 1481947 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:20:43Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:20:43.187830 1481947 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:20:43.201550 1481947 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 22:20:43.201611 1481947 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 22:20:43.201703 1481947 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:20:43.211977 1481947 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:20:43.212623 1481947 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-007061" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:43.212936 1481947 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-1270657/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-007061" cluster setting kubeconfig missing "newest-cni-007061" context setting]
	I1002 22:20:43.213415 1481947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.215073 1481947 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:20:43.232303 1481947 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1002 22:20:43.232378 1481947 kubeadm.go:601] duration metric: took 30.740482ms to restartPrimaryControlPlane
	I1002 22:20:43.232401 1481947 kubeadm.go:402] duration metric: took 151.01774ms to StartCluster
	I1002 22:20:43.232440 1481947 settings.go:142] acquiring lock: {Name:mk0f3b7f838a9793ea5b0055e2a25ff28c15c752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.232523 1481947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:43.233478 1481947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/kubeconfig: {Name:mk2efc028fc04dad09eb7c4801fbcc9b2ad7e771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:43.233741 1481947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:20:43.234157 1481947 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 22:20:43.234232 1481947 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-007061"
	I1002 22:20:43.234246 1481947 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-007061"
	W1002 22:20:43.234252 1481947 addons.go:247] addon storage-provisioner should already be in state true
	I1002 22:20:43.234272 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.234731 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.234894 1481947 config.go:182] Loaded profile config "newest-cni-007061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:43.234978 1481947 addons.go:69] Setting dashboard=true in profile "newest-cni-007061"
	I1002 22:20:43.235009 1481947 addons.go:238] Setting addon dashboard=true in "newest-cni-007061"
	W1002 22:20:43.235032 1481947 addons.go:247] addon dashboard should already be in state true
	I1002 22:20:43.235079 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.235510 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.237081 1481947 addons.go:69] Setting default-storageclass=true in profile "newest-cni-007061"
	I1002 22:20:43.237116 1481947 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-007061"
	I1002 22:20:43.237887 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.242087 1481947 out.go:179] * Verifying Kubernetes components...
	I1002 22:20:43.252336 1481947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:20:43.286077 1481947 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 22:20:43.292111 1481947 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 22:20:43.292221 1481947 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:43.292231 1481947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 22:20:43.292317 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.303253 1481947 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1002 22:20:40.918171 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	W1002 22:20:42.918288 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:43.303852 1481947 addons.go:238] Setting addon default-storageclass=true in "newest-cni-007061"
	W1002 22:20:43.303870 1481947 addons.go:247] addon default-storageclass should already be in state true
	I1002 22:20:43.303922 1481947 host.go:66] Checking if "newest-cni-007061" exists ...
	I1002 22:20:43.304367 1481947 cli_runner.go:164] Run: docker container inspect newest-cni-007061 --format={{.State.Status}}
	I1002 22:20:43.306443 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 22:20:43.306472 1481947 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 22:20:43.306542 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.349999 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.353339 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.362476 1481947 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:43.362504 1481947 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 22:20:43.362568 1481947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-007061
	I1002 22:20:43.391702 1481947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34596 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/newest-cni-007061/id_rsa Username:docker}
	I1002 22:20:43.612370 1481947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 22:20:43.618406 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 22:20:43.648661 1481947 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:20:43.648810 1481947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:20:43.657952 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 22:20:43.682411 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 22:20:43.682486 1481947 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 22:20:43.772078 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 22:20:43.772150 1481947 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 22:20:43.843086 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 22:20:43.843164 1481947 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 22:20:43.875147 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 22:20:43.875220 1481947 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 22:20:43.896454 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 22:20:43.896532 1481947 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 22:20:43.913104 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 22:20:43.913179 1481947 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 22:20:43.939077 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 22:20:43.939154 1481947 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 22:20:43.961404 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 22:20:43.961480 1481947 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 22:20:43.987274 1481947 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 22:20:43.987347 1481947 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 22:20:44.013862 1481947 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1002 22:20:44.920721 1477309 pod_ready.go:104] pod "coredns-66bc5c9577-rj4bn" is not "Ready", error: <nil>
	I1002 22:20:46.420269 1477309 pod_ready.go:94] pod "coredns-66bc5c9577-rj4bn" is "Ready"
	I1002 22:20:46.420302 1477309 pod_ready.go:86] duration metric: took 36.508388071s for pod "coredns-66bc5c9577-rj4bn" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.425264 1477309 pod_ready.go:83] waiting for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.435794 1477309 pod_ready.go:94] pod "etcd-no-preload-975002" is "Ready"
	I1002 22:20:46.435836 1477309 pod_ready.go:86] duration metric: took 10.463097ms for pod "etcd-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.439664 1477309 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.451808 1477309 pod_ready.go:94] pod "kube-apiserver-no-preload-975002" is "Ready"
	I1002 22:20:46.451833 1477309 pod_ready.go:86] duration metric: took 12.140679ms for pod "kube-apiserver-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.459644 1477309 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.616674 1477309 pod_ready.go:94] pod "kube-controller-manager-no-preload-975002" is "Ready"
	I1002 22:20:46.616774 1477309 pod_ready.go:86] duration metric: took 157.04456ms for pod "kube-controller-manager-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:46.817107 1477309 pod_ready.go:83] waiting for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.215834 1477309 pod_ready.go:94] pod "kube-proxy-lzzt4" is "Ready"
	I1002 22:20:47.215859 1477309 pod_ready.go:86] duration metric: took 398.6635ms for pod "kube-proxy-lzzt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.416237 1477309 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.816336 1477309 pod_ready.go:94] pod "kube-scheduler-no-preload-975002" is "Ready"
	I1002 22:20:47.816369 1477309 pod_ready.go:86] duration metric: took 400.096615ms for pod "kube-scheduler-no-preload-975002" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 22:20:47.816389 1477309 pod_ready.go:40] duration metric: took 37.908835354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 22:20:47.922261 1477309 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:20:47.925640 1477309 out.go:179] * Done! kubectl is now configured to use "no-preload-975002" cluster and "default" namespace by default
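
The pod_ready loop above polls each labeled kube-system pod until its Ready condition turns True (or the pod is gone), then records the duration. A hypothetical client-go sketch of such a wait (waitPodReady, the 2s poll interval, and the pod name are illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-66bc5c9577-rj4bn", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}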
	I1002 22:20:48.952016 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.333529152s)
	I1002 22:20:48.952379 1481947 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.303511266s)
	I1002 22:20:48.952403 1481947 api_server.go:72] duration metric: took 5.718614601s to wait for apiserver process to appear ...
	I1002 22:20:48.952409 1481947 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:20:48.952422 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:48.990503 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:48.990533 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:49.453183 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:49.461852 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:49.461891 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:49.709398 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.051363098s)
	I1002 22:20:49.709508 1481947 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.695569927s)
	I1002 22:20:49.712610 1481947 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-007061 addons enable metrics-server
	
	I1002 22:20:49.715692 1481947 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1002 22:20:49.718582 1481947 addons.go:514] duration metric: took 6.484405302s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1002 22:20:49.953280 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:49.970201 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 22:20:49.970236 1481947 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 22:20:50.452801 1481947 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1002 22:20:50.461990 1481947 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1002 22:20:50.463247 1481947 api_server.go:141] control plane version: v1.34.1
	I1002 22:20:50.463277 1481947 api_server.go:131] duration metric: took 1.510860282s to wait for apiserver health ...
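
The probe above retries https://192.168.85.2:8443/healthz roughly every 500ms, treating 500 responses (here caused by the rbac/bootstrap-roles post-start hook still running) as not-yet-healthy until a 200 "ok" arrives. A hedged sketch of that loop (waitHealthz is illustrative; TLS verification is skipped because only liveness is being tested here, with cert validity checked separately above):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
			// Non-200: log the component breakdown, as the apiserver does above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}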
	I1002 22:20:50.463287 1481947 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:20:50.467241 1481947 system_pods.go:59] 8 kube-system pods found
	I1002 22:20:50.467284 1481947 system_pods.go:61] "coredns-66bc5c9577-h7pp8" [c67b11a7-df6e-47e6-ad4f-d31506dd89b3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:50.467295 1481947 system_pods.go:61] "etcd-newest-cni-007061" [d0ab93e4-ccf4-4b37-9203-848cd4c28976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 22:20:50.467317 1481947 system_pods.go:61] "kindnet-h2fvd" [b797cb62-e31f-4e5b-825b-81189902db5f] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1002 22:20:50.467330 1481947 system_pods.go:61] "kube-apiserver-newest-cni-007061" [6aec1d67-1e4b-4386-b8a8-4ff00284349f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 22:20:50.467337 1481947 system_pods.go:61] "kube-controller-manager-newest-cni-007061" [d00bdb1f-2170-4f7e-815a-5deb837a0264] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 22:20:50.467352 1481947 system_pods.go:61] "kube-proxy-m892s" [1995a215-d278-4c89-b447-26a58362aab5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 22:20:50.467360 1481947 system_pods.go:61] "kube-scheduler-newest-cni-007061" [c7a188d8-ffeb-4928-91de-f1bcd7a14aa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 22:20:50.467373 1481947 system_pods.go:61] "storage-provisioner" [063bf5bf-9b28-43fe-9f9e-f76c3dc4bd44] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 22:20:50.467389 1481947 system_pods.go:74] duration metric: took 4.089773ms to wait for pod list to return data ...
	I1002 22:20:50.467406 1481947 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:20:50.471487 1481947 default_sa.go:45] found service account: "default"
	I1002 22:20:50.471514 1481947 default_sa.go:55] duration metric: took 4.101932ms for default service account to be created ...
	I1002 22:20:50.471527 1481947 kubeadm.go:586] duration metric: took 7.23773756s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 22:20:50.471545 1481947 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:20:50.474402 1481947 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:20:50.474450 1481947 node_conditions.go:123] node cpu capacity is 2
	I1002 22:20:50.474462 1481947 node_conditions.go:105] duration metric: took 2.912891ms to run NodePressure ...
	I1002 22:20:50.474474 1481947 start.go:241] waiting for startup goroutines ...
	I1002 22:20:50.474483 1481947 start.go:246] waiting for cluster config update ...
	I1002 22:20:50.474495 1481947 start.go:255] writing updated cluster config ...
	I1002 22:20:50.474801 1481947 ssh_runner.go:195] Run: rm -f paused
	I1002 22:20:50.539853 1481947 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 22:20:50.545839 1481947 out.go:179] * Done! kubectl is now configured to use "newest-cni-007061" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.739760973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.748687146Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-m892s/POD" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.748759596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.750063441Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=208036e7-d3d1-4cde-a328-ce2b7f67b8ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.752518806Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.759539881Z" level=info msg="Ran pod sandbox b1bad2370f1aebbd229bdd7cdb63e0aef331d7fe74207f2d3860a5964ab66010 with infra container: kube-system/kindnet-h2fvd/POD" id=208036e7-d3d1-4cde-a328-ce2b7f67b8ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.761153538Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9af9d48e-f35d-4a7a-bf7f-f78b8f4e3f86 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.77252509Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3430c1c6-58ef-4111-9cf5-cc8916a05c37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.775685689Z" level=info msg="Ran pod sandbox 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d with infra container: kube-system/kube-proxy-m892s/POD" id=cbeda2e6-f8d9-410a-85d9-77d82b63e74c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.77605429Z" level=info msg="Creating container: kube-system/kindnet-h2fvd/kindnet-cni" id=a2f0886b-0509-4ba5-b9ec-259a9a2bf5fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.78108647Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7bbe5355-d54f-4edd-9b09-8ce0bae3fe1e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.784342714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.790206543Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2d10a1d8-6297-4180-9c93-850a4441461b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.791658496Z" level=info msg="Creating container: kube-system/kube-proxy-m892s/kube-proxy" id=e484d3b1-eae0-4e8e-a9ff-0ccc1b190660 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.792131611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.796448727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.797109737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.80338872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.804083329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.829631716Z" level=info msg="Created container 4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be: kube-system/kindnet-h2fvd/kindnet-cni" id=a2f0886b-0509-4ba5-b9ec-259a9a2bf5fc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.830382431Z" level=info msg="Starting container: 4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be" id=c74875da-01bf-4dc1-91d2-0e53c65d2388 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.8324034Z" level=info msg="Started container" PID=1059 containerID=4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be description=kube-system/kindnet-h2fvd/kindnet-cni id=c74875da-01bf-4dc1-91d2-0e53c65d2388 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1bad2370f1aebbd229bdd7cdb63e0aef331d7fe74207f2d3860a5964ab66010
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.84579798Z" level=info msg="Created container fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71: kube-system/kube-proxy-m892s/kube-proxy" id=e484d3b1-eae0-4e8e-a9ff-0ccc1b190660 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.846925976Z" level=info msg="Starting container: fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71" id=b3bf8440-8d51-47ba-ab3c-dd75d14ea95e name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:49 newest-cni-007061 crio[613]: time="2025-10-02T22:20:49.851038279Z" level=info msg="Started container" PID=1063 containerID=fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71 description=kube-system/kube-proxy-m892s/kube-proxy id=b3bf8440-8d51-47ba-ab3c-dd75d14ea95e name=/runtime.v1.RuntimeService/StartContainer sandboxID=76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd93b88e0fbf0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   76ec07d9b07ee       kube-proxy-m892s                            kube-system
	4dc95cb82fa9c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   b1bad2370f1ae       kindnet-h2fvd                               kube-system
	0f7ed4457e4dd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   c44eed6c6ce85       kube-scheduler-newest-cni-007061            kube-system
	2a391833bd371       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   5ebf973142ff8       etcd-newest-cni-007061                      kube-system
	1647dd59c85ec       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   a13911fb27326       kube-controller-manager-newest-cni-007061   kube-system
	b1257b68a4678       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   d263437eac2ed       kube-apiserver-newest-cni-007061            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-007061
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-007061
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=newest-cni-007061
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_20_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:20:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-007061
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:20:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-007061
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 aca5a2dd38314baeac04f0600b7f0a8f
	  System UUID:                dd20f051-28eb-4702-9fc4-2f1d38d2bf49
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-007061                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-h2fvd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-007061             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-007061    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-m892s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-007061             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-007061 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-007061 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-007061 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-007061 event: Registered Node newest-cni-007061 in Controller
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-007061 event: Registered Node newest-cni-007061 in Controller
	
	
	==> dmesg <==
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:20] overlayfs: idmapped layers are currently not supported
	[ +29.672765] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2a391833bd371b962840f5da1a5dd64f92ab4d26ed844dd8d9839166a6995da0] <==
	{"level":"warn","ts":"2025-10-02T22:20:46.362934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.385095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.422501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.478218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.502468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.529934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.563652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.614786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.647512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.686434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.726507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.762211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.804084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.922664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.927046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.964501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:46.995011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.037028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.074015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.117233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.167991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.218459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.265637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.284690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:47.461492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:20:56 up  7:03,  0 user,  load average: 4.53, 3.80, 2.73
	Linux newest-cni-007061 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4dc95cb82fa9c9dd295c73c5e19b6a594d6e08ba376187d9abc738dfe639d2be] <==
	I1002 22:20:50.009983       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:20:50.011842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1002 22:20:50.012010       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:20:50.012031       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:20:50.012047       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:20:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:20:50.210586       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:20:50.210668       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:20:50.210705       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:20:50.211652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b1257b68a4678d0075bb779bbc3dab7c4d2a1f18219e8d82946226a0adaa9b98] <==
	I1002 22:20:48.724050       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:48.733513       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:20:48.733544       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:20:48.733734       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:20:48.733817       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:20:48.733857       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1002 22:20:48.734532       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 22:20:48.744440       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:20:48.767329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1002 22:20:48.770106       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1002 22:20:48.773957       1 aggregator.go:171] initial CRD sync complete...
	I1002 22:20:48.773979       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 22:20:48.773986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:20:48.773994       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:20:49.297620       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:20:49.336684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:20:49.366740       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:20:49.377983       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:20:49.386818       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:20:49.437421       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:20:49.497731       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.172.151"}
	I1002 22:20:49.534485       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.133.207"}
	I1002 22:20:52.007538       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:20:52.308124       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 22:20:52.567963       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1647dd59c85ece18e477c3ed89cf0b1d6fdff809cbd56718dcfcb97b915b83e4] <==
	I1002 22:20:52.007754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:20:52.012046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:20:52.015720       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 22:20:52.016014       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:20:52.016073       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 22:20:52.020063       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1002 22:20:52.028094       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 22:20:52.028143       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 22:20:52.028169       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 22:20:52.028174       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 22:20:52.028179       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 22:20:52.031976       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:20:52.035586       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:20:52.043783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:52.050783       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:20:52.050826       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1002 22:20:52.050906       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 22:20:52.051003       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 22:20:52.051428       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:20:52.051871       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:20:52.051973       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:20:52.058242       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:20:52.060988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:52.061016       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:20:52.061023       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [fd93b88e0fbf0e210fe52a2ea6b4a1c691eb246d5351d8be1b25474bcffefa71] <==
	I1002 22:20:49.963454       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:20:50.062232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:20:50.163327       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:20:50.163363       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1002 22:20:50.163458       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:20:50.182268       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:20:50.182340       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:20:50.185778       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:20:50.187793       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:20:50.187867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:50.193309       1 config.go:200] "Starting service config controller"
	I1002 22:20:50.193422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:20:50.193492       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:20:50.193548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:20:50.193593       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:20:50.193641       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:20:50.193946       1 config.go:309] "Starting node config controller"
	I1002 22:20:50.194012       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:20:50.194495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:20:50.293837       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:20:50.293835       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:20:50.293855       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0f7ed4457e4dd510d818c61d551390c715d5526db66159ff7ebad267e8eeae6c] <==
	I1002 22:20:45.453720       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:20:48.914384       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:20:48.920581       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:48.935214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:20:48.939941       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:20:48.939976       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:20:48.940006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:20:48.940710       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:48.940728       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:48.942785       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:48.942797       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:49.040144       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:20:49.040833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:49.043825       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.627554     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865693     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865805     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.865848     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.866147     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-007061\" already exists" pod="kube-system/etcd-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.866164     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.866957     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.918878     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-007061\" already exists" pod="kube-system/kube-apiserver-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.918928     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.933086     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-007061\" already exists" pod="kube-system/kube-controller-manager-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: I1002 22:20:48.933150     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-007061"
	Oct 02 22:20:48 newest-cni-007061 kubelet[728]: E1002 22:20:48.997076     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-007061\" already exists" pod="kube-system/kube-scheduler-newest-cni-007061"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.428803     728 apiserver.go:52] "Watching apiserver"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.527489     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554874     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-xtables-lock\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554940     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-lib-modules\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554966     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-cni-cfg\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.554985     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b797cb62-e31f-4e5b-825b-81189902db5f-lib-modules\") pod \"kindnet-h2fvd\" (UID: \"b797cb62-e31f-4e5b-825b-81189902db5f\") " pod="kube-system/kindnet-h2fvd"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.555012     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1995a215-d278-4c89-b447-26a58362aab5-xtables-lock\") pod \"kube-proxy-m892s\" (UID: \"1995a215-d278-4c89-b447-26a58362aab5\") " pod="kube-system/kube-proxy-m892s"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: I1002 22:20:49.587874     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 22:20:49 newest-cni-007061 kubelet[728]: W1002 22:20:49.774143     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3375b860c99557ab845f1afa4812febd1b2c36909b009e0c8609075998cceb01/crio-76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d WatchSource:0}: Error finding container 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d: Status 404 returned error can't find the container with id 76ec07d9b07eee8410e703931031e6744383a583b1b340f147cda4ec48f4320d
	Oct 02 22:20:51 newest-cni-007061 kubelet[728]: I1002 22:20:51.718819     728 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:20:51 newest-cni-007061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
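The Ready=False condition in the node description above is the kubelet's network-readiness check: no CNI configuration file was present in /etc/cni/net.d/ even though the kindnet pod had started. A minimal sketch for checking this by hand (it assumes the docker driver and the profile name taken from the logs above):

	kubectl --context newest-cni-007061 get node newest-cni-007061 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
	# Under the kic driver the node is a docker container, so the CNI directory
	# can be inspected directly; it should hold the kindnet config once CNI is up.
	docker exec newest-cni-007061 ls /etc/cni/net.d/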
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-007061 -n newest-cni-007061
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-007061 -n newest-cni-007061: exit status 2 (371.990381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-007061 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m: exit status 1 (86.386219ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h7pp8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7j6xr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tdp9m" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-007061 describe pod coredns-66bc5c9577-h7pp8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.03s)
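Note on the NotFound errors above: the post-mortem helper runs kubectl describe pod without a namespace flag, so the lookup happens in the default namespace, while the non-running pods it found live in kube-system and kubernetes-dashboard. A namespaced sketch of the same two-step check (pod names taken from the field-selector query above):

	kubectl --context newest-cni-007061 get po -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-007061 -n kube-system describe pod coredns-66bc5c9577-h7pp8 storage-provisioner
	kubectl --context newest-cni-007061 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-7j6xr kubernetes-dashboard-855c9754f9-tdp9m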

TestStartStop/group/no-preload/serial/Pause (8.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-975002 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-975002 --alsologtostderr -v=1: exit status 80 (2.322638873s)

-- stdout --
	* Pausing node no-preload-975002 ... 
	
	

-- /stdout --
** stderr ** 
	I1002 22:20:59.917476 1485275 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:59.917741 1485275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.917790 1485275 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:59.917810 1485275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.918236 1485275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:59.918612 1485275 out.go:368] Setting JSON to false
	I1002 22:20:59.918683 1485275 mustload.go:65] Loading cluster: no-preload-975002
	I1002 22:20:59.919309 1485275 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:59.920181 1485275 cli_runner.go:164] Run: docker container inspect no-preload-975002 --format={{.State.Status}}
	I1002 22:20:59.941896 1485275 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:20:59.942247 1485275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:21:00.061081 1485275 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:55 SystemTime:2025-10-02 22:21:00.028457158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:21:00.061834 1485275 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-975002 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1002 22:21:00.065797 1485275 out.go:179] * Pausing node no-preload-975002 ... 
	I1002 22:21:00.072330 1485275 host.go:66] Checking if "no-preload-975002" exists ...
	I1002 22:21:00.072728 1485275 ssh_runner.go:195] Run: systemctl --version
	I1002 22:21:00.072778 1485275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-975002
	I1002 22:21:00.123422 1485275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/no-preload-975002/id_rsa Username:docker}
	I1002 22:21:00.326903 1485275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:21:00.351452 1485275 pause.go:51] kubelet running: true
	I1002 22:21:00.351537 1485275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:21:00.693275 1485275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:21:00.693374 1485275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:21:00.781149 1485275 cri.go:89] found id: "7502d4f4598c15d0639e48931410f1dfd548e8ab6d7cb9f57a61c196c9b65208"
	I1002 22:21:00.781169 1485275 cri.go:89] found id: "173dda373de876dc4ee0e0d27400b6b42ec182e713432709b586375f99657c3a"
	I1002 22:21:00.781173 1485275 cri.go:89] found id: "0281f8cdfcd2031ec210a895649cebd0ffefc9c6ebb75564bbddb8613f810d4d"
	I1002 22:21:00.781179 1485275 cri.go:89] found id: "15627b8315997731057367b41a0646bd5db72708d57cc3b8fccc27f79a99dc86"
	I1002 22:21:00.781182 1485275 cri.go:89] found id: "2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c"
	I1002 22:21:00.781186 1485275 cri.go:89] found id: "1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1"
	I1002 22:21:00.781188 1485275 cri.go:89] found id: "43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915"
	I1002 22:21:00.781192 1485275 cri.go:89] found id: "6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56"
	I1002 22:21:00.781195 1485275 cri.go:89] found id: "61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289"
	I1002 22:21:00.781201 1485275 cri.go:89] found id: "0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a"
	I1002 22:21:00.781204 1485275 cri.go:89] found id: "15a6e97f7aa572e9bc1e965b413f2a120067afd70f77716ef63f1f153dee8cf2"
	I1002 22:21:00.781207 1485275 cri.go:89] found id: ""
	I1002 22:21:00.781256 1485275 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:21:00.793903 1485275 retry.go:31] will retry after 327.492503ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:21:00Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:21:01.122163 1485275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:21:01.147133 1485275 pause.go:51] kubelet running: false
	I1002 22:21:01.147197 1485275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:21:01.377243 1485275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:21:01.377324 1485275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:21:01.478071 1485275 cri.go:89] found id: "7502d4f4598c15d0639e48931410f1dfd548e8ab6d7cb9f57a61c196c9b65208"
	I1002 22:21:01.478090 1485275 cri.go:89] found id: "173dda373de876dc4ee0e0d27400b6b42ec182e713432709b586375f99657c3a"
	I1002 22:21:01.478095 1485275 cri.go:89] found id: "0281f8cdfcd2031ec210a895649cebd0ffefc9c6ebb75564bbddb8613f810d4d"
	I1002 22:21:01.478098 1485275 cri.go:89] found id: "15627b8315997731057367b41a0646bd5db72708d57cc3b8fccc27f79a99dc86"
	I1002 22:21:01.478101 1485275 cri.go:89] found id: "2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c"
	I1002 22:21:01.478122 1485275 cri.go:89] found id: "1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1"
	I1002 22:21:01.478125 1485275 cri.go:89] found id: "43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915"
	I1002 22:21:01.478128 1485275 cri.go:89] found id: "6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56"
	I1002 22:21:01.478131 1485275 cri.go:89] found id: "61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289"
	I1002 22:21:01.478142 1485275 cri.go:89] found id: "0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a"
	I1002 22:21:01.478145 1485275 cri.go:89] found id: "15a6e97f7aa572e9bc1e965b413f2a120067afd70f77716ef63f1f153dee8cf2"
	I1002 22:21:01.478148 1485275 cri.go:89] found id: ""
	I1002 22:21:01.478199 1485275 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:21:01.491706 1485275 retry.go:31] will retry after 334.157266ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:21:01Z" level=error msg="open /run/runc: no such file or directory"
	I1002 22:21:01.826197 1485275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:21:01.839898 1485275 pause.go:51] kubelet running: false
	I1002 22:21:01.839980 1485275 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1002 22:21:02.033530 1485275 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1002 22:21:02.033612 1485275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1002 22:21:02.120241 1485275 cri.go:89] found id: "7502d4f4598c15d0639e48931410f1dfd548e8ab6d7cb9f57a61c196c9b65208"
	I1002 22:21:02.120267 1485275 cri.go:89] found id: "173dda373de876dc4ee0e0d27400b6b42ec182e713432709b586375f99657c3a"
	I1002 22:21:02.120272 1485275 cri.go:89] found id: "0281f8cdfcd2031ec210a895649cebd0ffefc9c6ebb75564bbddb8613f810d4d"
	I1002 22:21:02.120275 1485275 cri.go:89] found id: "15627b8315997731057367b41a0646bd5db72708d57cc3b8fccc27f79a99dc86"
	I1002 22:21:02.120279 1485275 cri.go:89] found id: "2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c"
	I1002 22:21:02.120283 1485275 cri.go:89] found id: "1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1"
	I1002 22:21:02.120286 1485275 cri.go:89] found id: "43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915"
	I1002 22:21:02.120289 1485275 cri.go:89] found id: "6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56"
	I1002 22:21:02.120292 1485275 cri.go:89] found id: "61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289"
	I1002 22:21:02.120298 1485275 cri.go:89] found id: "0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a"
	I1002 22:21:02.120301 1485275 cri.go:89] found id: "15a6e97f7aa572e9bc1e965b413f2a120067afd70f77716ef63f1f153dee8cf2"
	I1002 22:21:02.120304 1485275 cri.go:89] found id: ""
	I1002 22:21:02.120356 1485275 ssh_runner.go:195] Run: sudo runc list -f json
	I1002 22:21:02.138459 1485275 out.go:203] 
	W1002 22:21:02.142010 1485275 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T22:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1002 22:21:02.142174 1485275 out.go:285] * 
	* 
	W1002 22:21:02.152727 1485275 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:21:02.156110 1485275 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-975002 --alsologtostderr -v=1 failed: exit status 80
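The GUEST_PAUSE failure follows directly from the stderr above: pause first disables the kubelet, lists CRI containers with crictl (which succeeds, finding eleven ids), then shells out to runc list -f json, which fails because /run/runc, runc's default state root, does not exist on this node. Recent cri-o releases likely keep container state under a different runtime root (or use a different OCI runtime such as crun), so an absent /run/runc is plausible even on a healthy crio node. A sketch for reproducing the probe, assuming the profile name from the logs:

	out/minikube-linux-arm64 -p no-preload-975002 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds, as above
	out/minikube-linux-arm64 -p no-preload-975002 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
	# runc accepts an alternate state root; the actual path is cri-o configuration,
	# so <state-root> below is a placeholder, not a known value:
	out/minikube-linux-arm64 -p no-preload-975002 ssh -- sudo runc --root <state-root> list -f json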
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-975002
helpers_test.go:243: (dbg) docker inspect no-preload-975002:

-- stdout --
	[
	    {
	        "Id": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	        "Created": "2025-10-02T22:18:14.898358454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1477776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:19:49.975438972Z",
	            "FinishedAt": "2025-10-02T22:19:47.310226213Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hostname",
	        "HostsPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hosts",
	        "LogPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763-json.log",
	        "Name": "/no-preload-975002",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-975002:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-975002",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	                "LowerDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-975002",
	                "Source": "/var/lib/docker/volumes/no-preload-975002/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-975002",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-975002",
	                "name.minikube.sigs.k8s.io": "no-preload-975002",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18b04379e76db7c09f7423cc7a3ee4bf9ac9aa6795a8e90d88baebe675e77853",
	            "SandboxKey": "/var/run/docker/netns/18b04379e76d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-975002": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:f5:c6:bb:56:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdf5cacca0e9b177bb286cc49833ddf6be2feeac26e6da7eb90c632741658614",
	                    "EndpointID": "d5c6eae5878b15114a59f017c441e517de2a0d172f6798d949d461c29055b6c3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-975002",
	                        "845f3e6dfe04"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
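The dump above is raw `docker container inspect` output for the no-preload-975002 node container; the fields the harness depends on are the published host ports (SSH on 22/tcp is bound to 127.0.0.1:34591) and the container's address on the no-preload-975002 network (192.168.76.2). As a minimal sketch, not minikube's own code, the SSH port mapping can be read back programmatically like this, assuming Docker is on PATH and the container exists:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the slice of the inspect schema needed for the port lookup,
// matching the NetworkSettings.Ports shape in the dump above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// `docker container inspect` prints a JSON array, as in the dump above.
	out, err := exec.Command("docker", "container", "inspect", "no-preload-975002").Output()
	if err != nil {
		log.Fatal(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	// For the dump above this prints 127.0.0.1:34591.
	for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}
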
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002: exit status 2 (421.279644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-975002 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-975002 logs -n 25: (1.779631771s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ stop    │ -p newest-cni-007061 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-007061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ image   │ newest-cni-007061 image list --format=json                                                                                                                                                                                                    │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p newest-cni-007061 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ delete  │ -p newest-cni-007061                                                                                                                                                                                                                          │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ delete  │ -p newest-cni-007061                                                                                                                                                                                                                          │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p auto-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-198170                  │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ image   │ no-preload-975002 image list --format=json                                                                                                                                                                                                    │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p no-preload-975002 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:20:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:20:59.618064 1485171 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:59.618659 1485171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.618695 1485171 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:59.618716 1485171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.619022 1485171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:59.619533 1485171 out.go:368] Setting JSON to false
	I1002 22:20:59.620612 1485171 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25385,"bootTime":1759418275,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:20:59.620707 1485171 start.go:140] virtualization:  
	I1002 22:20:59.624692 1485171 out.go:179] * [auto-198170] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:20:59.628084 1485171 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:20:59.628152 1485171 notify.go:220] Checking for updates...
	I1002 22:20:59.634466 1485171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:20:59.639023 1485171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:59.650906 1485171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:20:59.654060 1485171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:20:59.657053 1485171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:20:59.660531 1485171 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:59.660637 1485171 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:20:59.709304 1485171 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:20:59.709444 1485171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:59.778051 1485171 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:59.767792263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:59.778586 1485171 docker.go:318] overlay module found
	I1002 22:20:59.781773 1485171 out.go:179] * Using the docker driver based on user configuration
	I1002 22:20:59.784660 1485171 start.go:304] selected driver: docker
	I1002 22:20:59.784680 1485171 start.go:924] validating driver "docker" against <nil>
	I1002 22:20:59.784694 1485171 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:20:59.785417 1485171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:59.867540 1485171 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:59.856740996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:59.867695 1485171 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:20:59.867940 1485171 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:20:59.871158 1485171 out.go:179] * Using Docker driver with root privileges
	I1002 22:20:59.874137 1485171 cni.go:84] Creating CNI manager for ""
	I1002 22:20:59.874211 1485171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:59.874220 1485171 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:20:59.874290 1485171 start.go:348] cluster config:
	{Name:auto-198170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-198170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:59.877471 1485171 out.go:179] * Starting "auto-198170" primary control-plane node in "auto-198170" cluster
	I1002 22:20:59.880324 1485171 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:20:59.883238 1485171 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:20:59.886172 1485171 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:59.886229 1485171 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:20:59.886238 1485171 cache.go:58] Caching tarball of preloaded images
	I1002 22:20:59.886338 1485171 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:20:59.886348 1485171 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:20:59.886457 1485171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/config.json ...
	I1002 22:20:59.886474 1485171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/config.json: {Name:mk6300f2ea222d58746b17b9cf7fbc4ed827bc4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:59.886632 1485171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:20:59.919721 1485171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:20:59.919751 1485171 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:20:59.919764 1485171 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:20:59.919791 1485171 start.go:360] acquireMachinesLock for auto-198170: {Name:mk80855349d994dcd47e5c288301f994f739c8f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:20:59.919904 1485171 start.go:364] duration metric: took 84.38µs to acquireMachinesLock for "auto-198170"
	I1002 22:20:59.919968 1485171 start.go:93] Provisioning new machine with config: &{Name:auto-198170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-198170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:20:59.920050 1485171 start.go:125] createHost starting for "" (driver="docker")
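
The start log above takes the cache fast path: the kicbase image is already in the local Docker daemon ("skipping pull") and the v1.34.1 cri-o preload tarball is already on disk ("skipping download"). A minimal sketch of that existence check, using the cache path copied from the preload.go lines above and not reflecting minikube's actual helper:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path taken verbatim from the preload.go log lines above.
	tarball := "/home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("preload found in cache, skipping download")
	} else {
		fmt.Println("preload missing, will download:", err)
	}
}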
	
	
	==> CRI-O <==
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.911707768Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916055661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916092436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916114794Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919469622Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919502942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919528804Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924221282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924252239Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924268739Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.928465644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.928501787Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.968283426Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5098a0f-e69b-429a-99eb-399fdfb87719 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.970090056Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=88e48e0c-385f-42b9-8610-26797ec96f45 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.972598999Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=a2f23cca-2e42-4f75-ab7f-4ac645c9c5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.972907974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.986395205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.987074503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.022543712Z" level=info msg="Created container 0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=a2f23cca-2e42-4f75-ab7f-4ac645c9c5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.027787662Z" level=info msg="Starting container: 0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a" id=83965da1-2b38-4046-97f1-e50b21d818be name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.031179535Z" level=info msg="Started container" PID=1711 containerID=0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper id=83965da1-2b38-4046-97f1-e50b21d818be name=/runtime.v1.RuntimeService/StartContainer sandboxID=559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c
	Oct 02 22:20:59 no-preload-975002 conmon[1709]: conmon 0b355d3687872eb4c889 <ninfo>: container 1711 exited with status 1
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.373999267Z" level=info msg="Removing container: 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.393832705Z" level=info msg="Error loading conmon cgroup of container 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6: cgroup deleted" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.397238731Z" level=info msg="Removed container 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b355d3687872       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago        Exited              dashboard-metrics-scraper   3                   559a5c33f1fa0       dashboard-metrics-scraper-6ffb444bf9-qnh5d   kubernetes-dashboard
	7502d4f4598c1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   aaeb02b2212e1       storage-provisioner                          kube-system
	15a6e97f7aa57       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   c7cccecc27fa4       kubernetes-dashboard-855c9754f9-ns2nz        kubernetes-dashboard
	7acf5b78a8af2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   6797afd588169       busybox                                      default
	173dda373de87       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   c43133531face       coredns-66bc5c9577-rj4bn                     kube-system
	0281f8cdfcd20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   ea3605a35e18a       kindnet-hpq6g                                kube-system
	15627b8315997       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   009a823d4ed92       kube-proxy-lzzt4                             kube-system
	2e1b89716b7c8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   aaeb02b2212e1       storage-provisioner                          kube-system
	1a8916419f45f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b17f106f886f8       kube-apiserver-no-preload-975002             kube-system
	43a71ced78bb7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   03f3c9a85834c       kube-controller-manager-no-preload-975002    kube-system
	6e21bf887b8a4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f9a78f8258630       etcd-no-preload-975002                       kube-system
	61c7aad70fed0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0a0fba5b05056       kube-scheduler-no-preload-975002             kube-system
	
	
	==> coredns [173dda373de876dc4ee0e0d27400b6b42ec182e713432709b586375f99657c3a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58867 - 44939 "HINFO IN 7357301730326203011.6127420094347801142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017555901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
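
The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries are coredns failing to reach the apiserver through the kubernetes Service VIP while the control plane was down; a timeout (packets silently dropped) rather than "connection refused" is the usual signature when the backend process is stopped or paused but the Service's DNAT rules are still in place. A minimal probe that surfaces the same distinction, assuming it runs inside the cluster network where 10.96.0.1 is routable:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// An unresponsive backend behind the VIP typically surfaces as
		// "i/o timeout", as in the coredns log above; a live host with
		// a closed port would instead answer "connection refused".
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver VIP reachable")
}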
	
	
	==> describe nodes <==
	Name:               no-preload-975002
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-975002
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=no-preload-975002
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-975002
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-975002
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d4821ac97e143b6a85d624f1b145104
	  System UUID:                c00f53d9-fad4-4c59-816a-d3b3d9ec8fa6
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-rj4bn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 etcd-no-preload-975002                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-hpq6g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-no-preload-975002              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-no-preload-975002     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-lzzt4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-no-preload-975002              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qnh5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ns2nz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 116s                   kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s (x8 over 2m14s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           119s                   node-controller  Node no-preload-975002 event: Registered Node no-preload-975002 in Controller
	  Normal   NodeReady                103s                   kubelet          Node no-preload-975002 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)      kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)      kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)      kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node no-preload-975002 event: Registered Node no-preload-975002 in Controller
	
	
	==> dmesg <==
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:20] overlayfs: idmapped layers are currently not supported
	[ +29.672765] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56] <==
	{"level":"warn","ts":"2025-10-02T22:20:03.747828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.790404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.864667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.892965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.975702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.022412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.079993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.112453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.162087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.215725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.272765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.321778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.440685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.484198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.493754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.529377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.587631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.626111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.682190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.735763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.795877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.870807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.894965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.926601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:05.018583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:21:03 up  7:03,  0 user,  load average: 4.49, 3.80, 2.74
	Linux no-preload-975002 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0281f8cdfcd2031ec210a895649cebd0ffefc9c6ebb75564bbddb8613f810d4d] <==
	I1002 22:20:08.651318       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:20:08.670973       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:20:08.671224       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:20:08.671270       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:20:08.671305       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:20:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:20:08.910879       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:20:08.910917       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:20:08.910927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:20:08.911257       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:20:38.911220       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 22:20:38.911322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:20:38.911807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:20:38.911950       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 22:20:40.311369       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:20:40.311421       1 metrics.go:72] Registering metrics
	I1002 22:20:40.311513       1 controller.go:711] "Syncing nftables rules"
	I1002 22:20:48.911262       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:20:48.911430       1 main.go:301] handling current node
	I1002 22:20:58.918152       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:20:58.918257       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1] <==
	I1002 22:20:06.496238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:20:06.555595       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:20:06.555621       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:20:06.555748       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 22:20:06.555765       1 policy_source.go:240] refreshing policies
	I1002 22:20:06.601208       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:20:06.626483       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:20:06.627599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:20:06.634540       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:20:06.634887       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:20:06.655995       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:20:06.656081       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:20:06.656927       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:06.659254       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:20:07.146575       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:20:07.321755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:20:09.066213       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:20:09.313174       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:20:09.593657       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:20:09.637987       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:20:09.750321       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.122.14"}
	I1002 22:20:09.779008       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.154.69"}
	I1002 22:20:11.596080       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:20:11.989365       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:20:12.182348       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915] <==
	I1002 22:20:11.591216       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:20:11.592479       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:20:11.592820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:20:11.592832       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 22:20:11.592850       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 22:20:11.592858       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:20:11.592870       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:20:11.592884       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:20:11.592893       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 22:20:11.592902       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:20:11.596451       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:20:11.601226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:11.601285       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:20:11.601961       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:20:11.604700       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:20:11.612763       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:20:11.625052       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:20:11.626096       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:20:11.627811       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:20:11.632398       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:20:11.632475       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 22:20:11.632489       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:20:11.632499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:20:11.634132       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:20:11.639177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [15627b8315997731057367b41a0646bd5db72708d57cc3b8fccc27f79a99dc86] <==
	I1002 22:20:10.215059       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:20:10.482236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:20:10.591644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:20:10.591765       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:20:10.591874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:20:10.642971       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:20:10.643098       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:20:10.647371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:20:10.647756       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:20:10.647981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:10.649264       1 config.go:200] "Starting service config controller"
	I1002 22:20:10.649336       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:20:10.649379       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:20:10.649406       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:20:10.649442       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:20:10.649469       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:20:10.659826       1 config.go:309] "Starting node config controller"
	I1002 22:20:10.659909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:20:10.659940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:20:10.750055       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:20:10.750103       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:20:10.750136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289] <==
	I1002 22:20:01.800170       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:20:10.131003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:20:10.131123       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:10.142220       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:20:10.142410       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:20:10.142474       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:20:10.142524       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:20:10.148860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:10.148954       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:10.149001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.149042       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.243060       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:20:10.249339       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.249462       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:20:12 no-preload-975002 kubelet[769]: I1002 22:20:12.349218     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6fc1079b-27fc-4890-98b7-522c46236900-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qnh5d\" (UID: \"6fc1079b-27fc-4890-98b7-522c46236900\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d"
	Oct 02 22:20:13 no-preload-975002 kubelet[769]: W1002 22:20:13.479156     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c WatchSource:0}: Error finding container 559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c: Status 404 returned error can't find the container with id 559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c
	Oct 02 22:20:13 no-preload-975002 kubelet[769]: W1002 22:20:13.522091     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8 WatchSource:0}: Error finding container c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8: Status 404 returned error can't find the container with id c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8
	Oct 02 22:20:16 no-preload-975002 kubelet[769]: I1002 22:20:16.150342     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:20:21 no-preload-975002 kubelet[769]: I1002 22:20:21.244968     769 scope.go:117] "RemoveContainer" containerID="0e08be0bfb51be2b1ea785b426045539cdc9faca52eb4be7af32216c5347b8d9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: I1002 22:20:22.249454     769 scope.go:117] "RemoveContainer" containerID="0e08be0bfb51be2b1ea785b426045539cdc9faca52eb4be7af32216c5347b8d9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: I1002 22:20:22.249731     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: E1002 22:20:22.249875     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:23 no-preload-975002 kubelet[769]: I1002 22:20:23.422602     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:23 no-preload-975002 kubelet[769]: E1002 22:20:23.422798     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:36 no-preload-975002 kubelet[769]: I1002 22:20:36.968201     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.297690     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.297958     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: E1002 22:20:37.298224     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.320331     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ns2nz" podStartSLOduration=12.202749487 podStartE2EDuration="25.320313647s" podCreationTimestamp="2025-10-02 22:20:12 +0000 UTC" firstStartedPulling="2025-10-02 22:20:13.529888083 +0000 UTC m=+15.833342738" lastFinishedPulling="2025-10-02 22:20:26.647452243 +0000 UTC m=+28.950906898" observedRunningTime="2025-10-02 22:20:27.286485256 +0000 UTC m=+29.589939919" watchObservedRunningTime="2025-10-02 22:20:37.320313647 +0000 UTC m=+39.623768310"
	Oct 02 22:20:39 no-preload-975002 kubelet[769]: I1002 22:20:39.305503     769 scope.go:117] "RemoveContainer" containerID="2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c"
	Oct 02 22:20:43 no-preload-975002 kubelet[769]: I1002 22:20:43.413423     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:43 no-preload-975002 kubelet[769]: E1002 22:20:43.414242     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:58 no-preload-975002 kubelet[769]: I1002 22:20:58.967737     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: I1002 22:20:59.368831     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: I1002 22:20:59.369253     769 scope.go:117] "RemoveContainer" containerID="0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: E1002 22:20:59.369640     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:21:00 no-preload-975002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:21:00 no-preload-975002 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:21:00 no-preload-975002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [15a6e97f7aa572e9bc1e965b413f2a120067afd70f77716ef63f1f153dee8cf2] <==
	2025/10/02 22:20:26 Using namespace: kubernetes-dashboard
	2025/10/02 22:20:26 Using in-cluster config to connect to apiserver
	2025/10/02 22:20:26 Using secret token for csrf signing
	2025/10/02 22:20:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:20:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:20:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:20:26 Generating JWE encryption key
	2025/10/02 22:20:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:20:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:20:27 Initializing JWE encryption key from synchronized object
	2025/10/02 22:20:27 Creating in-cluster Sidecar client
	2025/10/02 22:20:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:20:27 Serving insecurely on HTTP port: 9090
	2025/10/02 22:20:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:20:26 Starting overwatch
	
	
	==> storage-provisioner [2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c] <==
	I1002 22:20:08.850556       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:20:38.852668       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7502d4f4598c15d0639e48931410f1dfd548e8ab6d7cb9f57a61c196c9b65208] <==
	I1002 22:20:39.380552       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 22:20:39.396683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:20:39.396737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:20:39.399885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:42.854876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:47.116643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:50.714739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:53.768165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:56.790455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:56.798118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.798282       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:20:56.802363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014!
	W1002 22:20:56.803475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.810510       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f1ad02-57ac-43be-8016-6454cc1639da", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014 became leader
	W1002 22:20:56.817207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.902904       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014!
	W1002 22:20:58.821218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:58.826487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:00.829986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:00.837465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:02.842493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:02.847584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
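
The storage-provisioner entries at the end of the log above show client-go leader election over the deprecated core/v1 Endpoints object kube-system/k8s.io-minikube-hostpath, which is also what triggers the repeated EndpointSlice deprecation warnings. A minimal sketch for inspecting that lock by hand, assuming the kubectl context name used in this report (with endpoints-based election the current holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation):

	kubectl --context no-preload-975002 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
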
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-975002 -n no-preload-975002: exit status 2 (367.989351ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-975002 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
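
The --format={{.APIServer}} and --format={{.Host}} probes above render single fields of minikube's status output through a Go template. A sketch of the same probe run by hand, assuming the binary path and profile name used throughout this report (minikube documents the status exit code as a bit field: 1 for minikube, 2 for the cluster, 4 for Kubernetes, which is why the harness treats exit status 2 as "may be ok" and keeps collecting logs):

	out/minikube-linux-arm64 status -p no-preload-975002 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
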
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-975002
helpers_test.go:243: (dbg) docker inspect no-preload-975002:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	        "Created": "2025-10-02T22:18:14.898358454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1477776,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T22:19:49.975438972Z",
	            "FinishedAt": "2025-10-02T22:19:47.310226213Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hostname",
	        "HostsPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/hosts",
	        "LogPath": "/var/lib/docker/containers/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763-json.log",
	        "Name": "/no-preload-975002",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-975002:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-975002",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763",
	                "LowerDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9-init/diff:/var/lib/docker/overlay2/52056d3d1c4338c5d728ae30a77a6429e194f85da10d3f1a0d0eb342861d6566/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b07df3270adbc6f54b19b3450098a2d9a7249e96e5647ac718a45dba05b81d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-975002",
	                "Source": "/var/lib/docker/volumes/no-preload-975002/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-975002",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-975002",
	                "name.minikube.sigs.k8s.io": "no-preload-975002",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18b04379e76db7c09f7423cc7a3ee4bf9ac9aa6795a8e90d88baebe675e77853",
	            "SandboxKey": "/var/run/docker/netns/18b04379e76d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-975002": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:f5:c6:bb:56:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bdf5cacca0e9b177bb286cc49833ddf6be2feeac26e6da7eb90c632741658614",
	                    "EndpointID": "d5c6eae5878b15114a59f017c441e517de2a0d172f6798d949d461c29055b6c3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-975002",
	                        "845f3e6dfe04"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
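
The docker inspect dump above is captured in full for the post-mortem; when only the container state and the published ports are of interest, the same data can be pulled through an inspect format template (a sketch reusing the container name from this report):

	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' no-preload-975002
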
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002: exit status 2 (371.769352ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-975002 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-975002 logs -n 25: (2.078556586s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-607037                                                                                                                                                                                                               │ disable-driver-mounts-607037 │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ stop    │ -p embed-certs-080134 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ addons  │ enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:18 UTC │
	│ start   │ -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:18 UTC │ 02 Oct 25 22:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-975002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ image   │ embed-certs-080134 image list --format=json                                                                                                                                                                                                   │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ pause   │ -p embed-certs-080134 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │                     │
	│ stop    │ -p no-preload-975002 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ delete  │ -p embed-certs-080134                                                                                                                                                                                                                         │ embed-certs-080134           │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:19 UTC │
	│ start   │ -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:19 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable metrics-server -p newest-cni-007061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ stop    │ -p newest-cni-007061 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ addons  │ enable dashboard -p newest-cni-007061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ image   │ newest-cni-007061 image list --format=json                                                                                                                                                                                                    │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p newest-cni-007061 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ delete  │ -p newest-cni-007061                                                                                                                                                                                                                          │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ delete  │ -p newest-cni-007061                                                                                                                                                                                                                          │ newest-cni-007061            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ start   │ -p auto-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-198170                  │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	│ image   │ no-preload-975002 image list --format=json                                                                                                                                                                                                    │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │ 02 Oct 25 22:20 UTC │
	│ pause   │ -p no-preload-975002 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-975002            │ jenkins │ v1.37.0 │ 02 Oct 25 22:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 22:20:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:20:59.618064 1485171 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:20:59.618659 1485171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.618695 1485171 out.go:374] Setting ErrFile to fd 2...
	I1002 22:20:59.618716 1485171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:20:59.619022 1485171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:20:59.619533 1485171 out.go:368] Setting JSON to false
	I1002 22:20:59.620612 1485171 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25385,"bootTime":1759418275,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:20:59.620707 1485171 start.go:140] virtualization:  
	I1002 22:20:59.624692 1485171 out.go:179] * [auto-198170] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:20:59.628084 1485171 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:20:59.628152 1485171 notify.go:220] Checking for updates...
	I1002 22:20:59.634466 1485171 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:20:59.639023 1485171 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:20:59.650906 1485171 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:20:59.654060 1485171 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:20:59.657053 1485171 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:20:59.660531 1485171 config.go:182] Loaded profile config "no-preload-975002": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:20:59.660637 1485171 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:20:59.709304 1485171 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:20:59.709444 1485171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:59.778051 1485171 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:59.767792263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:59.778586 1485171 docker.go:318] overlay module found
	I1002 22:20:59.781773 1485171 out.go:179] * Using the docker driver based on user configuration
	I1002 22:20:59.784660 1485171 start.go:304] selected driver: docker
	I1002 22:20:59.784680 1485171 start.go:924] validating driver "docker" against <nil>
	I1002 22:20:59.784694 1485171 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:20:59.785417 1485171 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:20:59.867540 1485171 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:20:59.856740996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:20:59.867695 1485171 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 22:20:59.867940 1485171 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 22:20:59.871158 1485171 out.go:179] * Using Docker driver with root privileges
	I1002 22:20:59.874137 1485171 cni.go:84] Creating CNI manager for ""
	I1002 22:20:59.874211 1485171 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:20:59.874220 1485171 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 22:20:59.874290 1485171 start.go:348] cluster config:
	{Name:auto-198170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-198170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 22:20:59.877471 1485171 out.go:179] * Starting "auto-198170" primary control-plane node in "auto-198170" cluster
	I1002 22:20:59.880324 1485171 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 22:20:59.883238 1485171 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 22:20:59.886172 1485171 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:20:59.886229 1485171 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 22:20:59.886238 1485171 cache.go:58] Caching tarball of preloaded images
	I1002 22:20:59.886338 1485171 preload.go:233] Found /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:20:59.886348 1485171 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 22:20:59.886457 1485171 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/config.json ...
	I1002 22:20:59.886474 1485171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/config.json: {Name:mk6300f2ea222d58746b17b9cf7fbc4ed827bc4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:20:59.886632 1485171 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 22:20:59.919721 1485171 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 22:20:59.919751 1485171 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 22:20:59.919764 1485171 cache.go:232] Successfully downloaded all kic artifacts
	I1002 22:20:59.919791 1485171 start.go:360] acquireMachinesLock for auto-198170: {Name:mk80855349d994dcd47e5c288301f994f739c8f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:20:59.919904 1485171 start.go:364] duration metric: took 84.38µs to acquireMachinesLock for "auto-198170"
	I1002 22:20:59.919968 1485171 start.go:93] Provisioning new machine with config: &{Name:auto-198170 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-198170 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:20:59.920050 1485171 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:20:59.923493 1485171 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 22:20:59.923733 1485171 start.go:159] libmachine.API.Create for "auto-198170" (driver="docker")
	I1002 22:20:59.923783 1485171 client.go:168] LocalClient.Create starting
	I1002 22:20:59.923883 1485171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/ca.pem
	I1002 22:20:59.923929 1485171 main.go:141] libmachine: Decoding PEM data...
	I1002 22:20:59.923948 1485171 main.go:141] libmachine: Parsing certificate...
	I1002 22:20:59.924006 1485171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-1270657/.minikube/certs/cert.pem
	I1002 22:20:59.924029 1485171 main.go:141] libmachine: Decoding PEM data...
	I1002 22:20:59.924048 1485171 main.go:141] libmachine: Parsing certificate...
	I1002 22:20:59.924424 1485171 cli_runner.go:164] Run: docker network inspect auto-198170 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:20:59.942718 1485171 cli_runner.go:211] docker network inspect auto-198170 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:20:59.942782 1485171 network_create.go:284] running [docker network inspect auto-198170] to gather additional debugging logs...
	I1002 22:20:59.942808 1485171 cli_runner.go:164] Run: docker network inspect auto-198170
	W1002 22:20:59.960162 1485171 cli_runner.go:211] docker network inspect auto-198170 returned with exit code 1
	I1002 22:20:59.960190 1485171 network_create.go:287] error running [docker network inspect auto-198170]: docker network inspect auto-198170: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-198170 not found
	I1002 22:20:59.960204 1485171 network_create.go:289] output of [docker network inspect auto-198170]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-198170 not found
	
	** /stderr **
	I1002 22:20:59.960312 1485171 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:20:59.982159 1485171 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
	I1002 22:20:59.982489 1485171 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4d7d491e9676 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:00:74:bd:3c:5f} reservation:<nil>}
	I1002 22:20:59.982842 1485171 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314191adf21d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:b2:ac:91:58:2a:d7} reservation:<nil>}
	I1002 22:20:59.983101 1485171 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bdf5cacca0e9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ce:ad:01:44:7b:64} reservation:<nil>}
	I1002 22:20:59.983519 1485171 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f12f0}
	I1002 22:20:59.983536 1485171 network_create.go:124] attempt to create docker network auto-198170 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1002 22:20:59.983597 1485171 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-198170 auto-198170
	I1002 22:21:00.192393 1485171 network_create.go:108] docker network auto-198170 192.168.85.0/24 created
	I1002 22:21:00.192430 1485171 kic.go:121] calculated static IP "192.168.85.2" for the "auto-198170" container
	I1002 22:21:00.192535 1485171 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:21:00.275229 1485171 cli_runner.go:164] Run: docker volume create auto-198170 --label name.minikube.sigs.k8s.io=auto-198170 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:21:00.324017 1485171 oci.go:103] Successfully created a docker volume auto-198170
	I1002 22:21:00.324120 1485171 cli_runner.go:164] Run: docker run --rm --name auto-198170-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-198170 --entrypoint /usr/bin/test -v auto-198170:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 22:21:00.962115 1485171 oci.go:107] Successfully prepared a docker volume auto-198170
	I1002 22:21:00.962168 1485171 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 22:21:00.962187 1485171 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 22:21:00.962266 1485171 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-198170:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
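
The network.go lines above show minikube probing 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, and 192.168.76.0/24 before settling on the free 192.168.85.0/24. A minimal Go sketch of that walk follows; the step of 9 between candidate third octets matches the subnets probed in this log, but the function name and the taken-set lookup are illustrative, not minikube's actual implementation.

// subnetpick.go: a minimal sketch of the free-subnet walk described above.
package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (step 9,
// matching the probes in the log) and returns the first subnet not taken.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 256; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as in the log
}
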
	
	
	==> CRI-O <==
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.911707768Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916055661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916092436Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.916114794Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919469622Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919502942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.919528804Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924221282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924252239Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.924268739Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.928465644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:20:48 no-preload-975002 crio[652]: time="2025-10-02T22:20:48.928501787Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.968283426Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5098a0f-e69b-429a-99eb-399fdfb87719 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.970090056Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=88e48e0c-385f-42b9-8610-26797ec96f45 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.972598999Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=a2f23cca-2e42-4f75-ab7f-4ac645c9c5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.972907974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.986395205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:58 no-preload-975002 crio[652]: time="2025-10-02T22:20:58.987074503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.022543712Z" level=info msg="Created container 0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=a2f23cca-2e42-4f75-ab7f-4ac645c9c5ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.027787662Z" level=info msg="Starting container: 0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a" id=83965da1-2b38-4046-97f1-e50b21d818be name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.031179535Z" level=info msg="Started container" PID=1711 containerID=0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper id=83965da1-2b38-4046-97f1-e50b21d818be name=/runtime.v1.RuntimeService/StartContainer sandboxID=559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c
	Oct 02 22:20:59 no-preload-975002 conmon[1709]: conmon 0b355d3687872eb4c889 <ninfo>: container 1711 exited with status 1
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.373999267Z" level=info msg="Removing container: 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.393832705Z" level=info msg="Error loading conmon cgroup of container 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6: cgroup deleted" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:20:59 no-preload-975002 crio[652]: time="2025-10-02T22:20:59.397238731Z" level=info msg="Removed container 70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d/dashboard-metrics-scraper" id=7c8bcb76-7cdb-4ec3-ac18-294e46a99bab name=/runtime.v1.RuntimeService/RemoveContainer
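
The CREATE, WRITE, and RENAME events above come from CRI-O watching /etc/cni/net.d while kindnet writes 10-kindnet.conflist.temp and renames it into place; each event triggers a reload of the default CNI network. Below is a minimal sketch of that event loop, assuming github.com/fsnotify/fsnotify; CRI-O's real reload logic is more involved, and this only shows the shape of the watch.

// cniwatch.go: a sketch of a directory watch like the "CNI monitoring event"
// lines above. Requires github.com/fsnotify/fsnotify as a module dependency.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// The temp-file-then-rename dance in the log shows up as
			// CREATE, WRITE, and RENAME events on *.conflist(.temp).
			if strings.HasSuffix(ev.Name, ".conflist") || strings.HasSuffix(ev.Name, ".conflist.temp") {
				log.Printf("CNI monitoring event %s %q - reloading default network", ev.Op, ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
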
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0b355d3687872       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   559a5c33f1fa0       dashboard-metrics-scraper-6ffb444bf9-qnh5d   kubernetes-dashboard
	7502d4f4598c1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           26 seconds ago       Running             storage-provisioner         2                   aaeb02b2212e1       storage-provisioner                          kube-system
	15a6e97f7aa57       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   c7cccecc27fa4       kubernetes-dashboard-855c9754f9-ns2nz        kubernetes-dashboard
	7acf5b78a8af2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   6797afd588169       busybox                                      default
	173dda373de87       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   c43133531face       coredns-66bc5c9577-rj4bn                     kube-system
	0281f8cdfcd20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   ea3605a35e18a       kindnet-hpq6g                                kube-system
	15627b8315997       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   009a823d4ed92       kube-proxy-lzzt4                             kube-system
	2e1b89716b7c8       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   aaeb02b2212e1       storage-provisioner                          kube-system
	1a8916419f45f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b17f106f886f8       kube-apiserver-no-preload-975002             kube-system
	43a71ced78bb7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   03f3c9a85834c       kube-controller-manager-no-preload-975002    kube-system
	6e21bf887b8a4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   f9a78f8258630       etcd-no-preload-975002                       kube-system
	61c7aad70fed0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   0a0fba5b05056       kube-scheduler-no-preload-975002             kube-system
	
	
	==> coredns [173dda373de876dc4ee0e0d27400b6b42ec182e713432709b586375f99657c3a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58867 - 44939 "HINFO IN 7357301730326203011.6127420094347801142. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017555901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
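
The coredns pattern above is: repeat "waiting for Kubernetes API" while the apiserver is unreachable, then start serving anyway with an "unsynced Kubernetes API" warning once a grace period expires. A hedged sketch of that wait-then-proceed shape follows; 10.96.0.1:443 is the kubernetes Service VIP seen in this report, and the deadline and retry interval are illustrative, not CoreDNS's actual values.

// apiwait.go: a sketch of "wait for the API, then start anyway with a warning".
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	const apiAddr = "10.96.0.1:443" // kubernetes Service VIP, per this report
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", apiAddr, 2*time.Second)
		if err == nil {
			conn.Close()
			log.Println("Kubernetes API reachable, starting server")
			return
		}
		log.Println("waiting for Kubernetes API before starting server")
		time.Sleep(2 * time.Second)
	}
	// Mirrors the [WARNING] line above: serve DNS even if the API never came up.
	log.Println("starting server with unsynced Kubernetes API")
}
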
	
	
	==> describe nodes <==
	Name:               no-preload-975002
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-975002
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=193cee6aa0f134b5df421bbd88a1ddd3223481a4
	                    minikube.k8s.io/name=no-preload-975002
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T22_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 22:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-975002
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 22:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 22:20:48 +0000   Thu, 02 Oct 2025 22:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-975002
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d4821ac97e143b6a85d624f1b145104
	  System UUID:                c00f53d9-fad4-4c59-816a-d3b3d9ec8fa6
	  Boot ID:                    0a3d24d0-981d-40e8-bead-a978d9758065
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-rj4bn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m1s
	  kube-system                 etcd-no-preload-975002                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-hpq6g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m1s
	  kube-system                 kube-apiserver-no-preload-975002              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-no-preload-975002     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-lzzt4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-no-preload-975002              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qnh5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ns2nz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 119s                   kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m6s                   kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s                   kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m6s                   kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m2s                   node-controller  Node no-preload-975002 event: Registered Node no-preload-975002 in Controller
	  Normal   NodeReady                106s                   kubelet          Node no-preload-975002 status is now: NodeReady
	  Normal   Starting                 69s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 69s)      kubelet          Node no-preload-975002 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 69s)      kubelet          Node no-preload-975002 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 69s)      kubelet          Node no-preload-975002 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node no-preload-975002 event: Registered Node no-preload-975002 in Controller
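
The Allocated resources table above reports cpu 850m (42%) against this 2-CPU node. A small sketch reproducing that arithmetic with k8s.io/apimachinery's resource package; the values are copied from the node's capacity and summed requests shown above.

// allocpct.go: the "cpu 850m (42%)" computation from the table above.
// Requires k8s.io/apimachinery as a module dependency.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := resource.MustParse("2")    // cpu capacity: 2 cores
	requests := resource.MustParse("850m") // summed CPU requests
	pct := 100 * requests.MilliValue() / capacity.MilliValue()
	fmt.Printf("cpu %s (%d%%)\n", requests.String(), pct) // cpu 850m (42%)
}
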
	
	
	==> dmesg <==
	[  +8.999507] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:50] overlayfs: idmapped layers are currently not supported
	[ +27.394407] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:51] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:52] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:53] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:54] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:57] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:58] overlayfs: idmapped layers are currently not supported
	[Oct 2 21:59] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:01] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:02] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:04] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:11] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:13] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:14] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:15] overlayfs: idmapped layers are currently not supported
	[ +10.038221] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:16] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:17] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:18] overlayfs: idmapped layers are currently not supported
	[ +12.782696] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:19] overlayfs: idmapped layers are currently not supported
	[Oct 2 22:20] overlayfs: idmapped layers are currently not supported
	[ +29.672765] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6e21bf887b8a4d294f36a9fe682b89471c2cb8e0efa90f34ee105c42e3a4ed56] <==
	{"level":"warn","ts":"2025-10-02T22:20:03.747828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.790404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.864667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.892965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:03.975702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.022412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.079993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.112453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.162087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.215725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.272765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.321778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.440685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.484198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.493754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.529377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.587631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.626111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.682190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.735763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.795877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.870807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.894965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:04.926601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T22:20:05.018583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:21:06 up  7:03,  0 user,  load average: 4.37, 3.79, 2.74
	Linux no-preload-975002 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0281f8cdfcd2031ec210a895649cebd0ffefc9c6ebb75564bbddb8613f810d4d] <==
	I1002 22:20:08.651318       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 22:20:08.670973       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1002 22:20:08.671224       1 main.go:148] setting mtu 1500 for CNI 
	I1002 22:20:08.671270       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 22:20:08.671305       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T22:20:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 22:20:08.910879       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 22:20:08.910917       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 22:20:08.910927       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 22:20:08.911257       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1002 22:20:38.911220       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1002 22:20:38.911322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1002 22:20:38.911807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1002 22:20:38.911950       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1002 22:20:40.311369       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 22:20:40.311421       1 metrics.go:72] Registering metrics
	I1002 22:20:40.311513       1 controller.go:711] "Syncing nftables rules"
	I1002 22:20:48.911262       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:20:48.911430       1 main.go:301] handling current node
	I1002 22:20:58.918152       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1002 22:20:58.918257       1 main.go:301] handling current node
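
The kindnet entries above show a node reconcile roughly every 10 seconds (22:20:48, then 22:20:58), each pass handling the current node's IPs. A minimal sketch of that cadence follows; the handler body is illustrative, where real kindnet programs routes and nftables rules per node.

// nodesync.go: a sketch of the ~10s reconcile loop visible in the log above.
package main

import (
	"log"
	"time"
)

func main() {
	nodeIPs := []string{"192.168.76.2"} // this node, per the log
	t := time.NewTicker(10 * time.Second)
	defer t.Stop()
	for range t.C {
		for _, ip := range nodeIPs {
			log.Printf("Handling node with IPs: map[%s:{}]", ip)
			log.Println("handling current node")
		}
	}
}
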
	
	
	==> kube-apiserver [1a8916419f45f0470c9355a6ffd477a3655ab0372337b6555a257fc3e14a31b1] <==
	I1002 22:20:06.496238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 22:20:06.555595       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 22:20:06.555621       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 22:20:06.555748       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 22:20:06.555765       1 policy_source.go:240] refreshing policies
	I1002 22:20:06.601208       1 cache.go:39] Caches are synced for autoregister controller
	I1002 22:20:06.626483       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 22:20:06.627599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 22:20:06.634540       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 22:20:06.634887       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1002 22:20:06.655995       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 22:20:06.656081       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 22:20:06.656927       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 22:20:06.659254       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 22:20:07.146575       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 22:20:07.321755       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 22:20:09.066213       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 22:20:09.313174       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 22:20:09.593657       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:20:09.637987       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 22:20:09.750321       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.122.14"}
	I1002 22:20:09.779008       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.154.69"}
	I1002 22:20:11.596080       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 22:20:11.989365       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 22:20:12.182348       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [43a71ced78bb7b029ec9c280521a8a4787f1bb401f876ef1e427eb9e5ed28915] <==
	I1002 22:20:11.591216       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1002 22:20:11.592479       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1002 22:20:11.592820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 22:20:11.592832       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 22:20:11.592850       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1002 22:20:11.592858       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 22:20:11.592870       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1002 22:20:11.592884       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 22:20:11.592893       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1002 22:20:11.592902       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1002 22:20:11.596451       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 22:20:11.601226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 22:20:11.601285       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 22:20:11.601961       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 22:20:11.604700       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 22:20:11.612763       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1002 22:20:11.625052       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 22:20:11.626096       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1002 22:20:11.627811       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 22:20:11.632398       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 22:20:11.632475       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 22:20:11.632489       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 22:20:11.632499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 22:20:11.634132       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 22:20:11.639177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [15627b8315997731057367b41a0646bd5db72708d57cc3b8fccc27f79a99dc86] <==
	I1002 22:20:10.215059       1 server_linux.go:53] "Using iptables proxy"
	I1002 22:20:10.482236       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 22:20:10.591644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 22:20:10.591765       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1002 22:20:10.591874       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 22:20:10.642971       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:20:10.643098       1 server_linux.go:132] "Using iptables Proxier"
	I1002 22:20:10.647371       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 22:20:10.647756       1 server.go:527] "Version info" version="v1.34.1"
	I1002 22:20:10.647981       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:10.649264       1 config.go:200] "Starting service config controller"
	I1002 22:20:10.649336       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 22:20:10.649379       1 config.go:106] "Starting endpoint slice config controller"
	I1002 22:20:10.649406       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 22:20:10.649442       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 22:20:10.649469       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 22:20:10.659826       1 config.go:309] "Starting node config controller"
	I1002 22:20:10.659909       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 22:20:10.659940       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 22:20:10.750055       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 22:20:10.750103       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 22:20:10.750136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [61c7aad70fed03ac13b24399a7eaea3aa68143333b96d64b5c76e9c67929d289] <==
	I1002 22:20:01.800170       1 serving.go:386] Generated self-signed cert in-memory
	I1002 22:20:10.131003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1002 22:20:10.131123       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:20:10.142220       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 22:20:10.142410       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1002 22:20:10.142474       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1002 22:20:10.142524       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 22:20:10.148860       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:10.148954       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:20:10.149001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.149042       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.243060       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 22:20:10.249339       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:20:10.249462       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 22:20:12 no-preload-975002 kubelet[769]: I1002 22:20:12.349218     769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6fc1079b-27fc-4890-98b7-522c46236900-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qnh5d\" (UID: \"6fc1079b-27fc-4890-98b7-522c46236900\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d"
	Oct 02 22:20:13 no-preload-975002 kubelet[769]: W1002 22:20:13.479156     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c WatchSource:0}: Error finding container 559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c: Status 404 returned error can't find the container with id 559a5c33f1fa06a8fc43da33093610591f6089809031a7705678b16f5bdbb37c
	Oct 02 22:20:13 no-preload-975002 kubelet[769]: W1002 22:20:13.522091     769 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/845f3e6dfe041f75a33e9dc5ebf5b4005b773bcbfefa575cfbb57a6815079763/crio-c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8 WatchSource:0}: Error finding container c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8: Status 404 returned error can't find the container with id c7cccecc27fa41cc75bca30b7fbc77d4d727aac96af1bd4368a47b89b85a37f8
	Oct 02 22:20:16 no-preload-975002 kubelet[769]: I1002 22:20:16.150342     769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 02 22:20:21 no-preload-975002 kubelet[769]: I1002 22:20:21.244968     769 scope.go:117] "RemoveContainer" containerID="0e08be0bfb51be2b1ea785b426045539cdc9faca52eb4be7af32216c5347b8d9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: I1002 22:20:22.249454     769 scope.go:117] "RemoveContainer" containerID="0e08be0bfb51be2b1ea785b426045539cdc9faca52eb4be7af32216c5347b8d9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: I1002 22:20:22.249731     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:22 no-preload-975002 kubelet[769]: E1002 22:20:22.249875     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:23 no-preload-975002 kubelet[769]: I1002 22:20:23.422602     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:23 no-preload-975002 kubelet[769]: E1002 22:20:23.422798     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:36 no-preload-975002 kubelet[769]: I1002 22:20:36.968201     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.297690     769 scope.go:117] "RemoveContainer" containerID="85b7c20e0fbfa9d85f15b10fc2c7e3783d78d37524c30c527fd2691f02cfb8e9"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.297958     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: E1002 22:20:37.298224     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:37 no-preload-975002 kubelet[769]: I1002 22:20:37.320331     769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ns2nz" podStartSLOduration=12.202749487 podStartE2EDuration="25.320313647s" podCreationTimestamp="2025-10-02 22:20:12 +0000 UTC" firstStartedPulling="2025-10-02 22:20:13.529888083 +0000 UTC m=+15.833342738" lastFinishedPulling="2025-10-02 22:20:26.647452243 +0000 UTC m=+28.950906898" observedRunningTime="2025-10-02 22:20:27.286485256 +0000 UTC m=+29.589939919" watchObservedRunningTime="2025-10-02 22:20:37.320313647 +0000 UTC m=+39.623768310"
	Oct 02 22:20:39 no-preload-975002 kubelet[769]: I1002 22:20:39.305503     769 scope.go:117] "RemoveContainer" containerID="2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c"
	Oct 02 22:20:43 no-preload-975002 kubelet[769]: I1002 22:20:43.413423     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:43 no-preload-975002 kubelet[769]: E1002 22:20:43.414242     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:20:58 no-preload-975002 kubelet[769]: I1002 22:20:58.967737     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: I1002 22:20:59.368831     769 scope.go:117] "RemoveContainer" containerID="70b216fbe6792dd0a21128977926bc7304291c7d916acd309b2ead6e4c63d5c6"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: I1002 22:20:59.369253     769 scope.go:117] "RemoveContainer" containerID="0b355d3687872eb4c8898c67ca431bcaba4e2f242878120169f4b4bf2ed6d85a"
	Oct 02 22:20:59 no-preload-975002 kubelet[769]: E1002 22:20:59.369640     769 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qnh5d_kubernetes-dashboard(6fc1079b-27fc-4890-98b7-522c46236900)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qnh5d" podUID="6fc1079b-27fc-4890-98b7-522c46236900"
	Oct 02 22:21:00 no-preload-975002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 02 22:21:00 no-preload-975002 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 02 22:21:00 no-preload-975002 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
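The dashboard-metrics-scraper entries above trace kubelet's crash-loop back-off: the CrashLoopBackOff delay doubles from 10s to 20s to 40s across restarts. A minimal Go sketch of that policy, assuming the commonly documented 10s initial delay and 5m cap (the constants and helper are illustrative, not kubelet source):

package main

import (
	"fmt"
	"time"
)

const (
	initialBackOff = 10 * time.Second // first delay seen above ("back-off 10s")
	maxBackOff     = 5 * time.Minute  // assumed cap; the doubling stops here
)

// backOffAfter doubles the delay once per prior restart, matching the
// 10s -> 20s -> 40s progression in the log.
func backOffAfter(restarts int) time.Duration {
	d := initialBackOff
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxBackOff {
			return maxBackOff
		}
	}
	return d
}

func main() {
	for r := 0; r < 4; r++ {
		fmt.Printf("restart %d: wait %s\n", r, backOffAfter(r))
	}
}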
	
	
	==> kubernetes-dashboard [15a6e97f7aa572e9bc1e965b413f2a120067afd70f77716ef63f1f153dee8cf2] <==
	2025/10/02 22:20:26 Using namespace: kubernetes-dashboard
	2025/10/02 22:20:26 Using in-cluster config to connect to apiserver
	2025/10/02 22:20:26 Using secret token for csrf signing
	2025/10/02 22:20:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/02 22:20:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/02 22:20:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/02 22:20:26 Generating JWE encryption key
	2025/10/02 22:20:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/02 22:20:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/02 22:20:27 Initializing JWE encryption key from synchronized object
	2025/10/02 22:20:27 Creating in-cluster Sidecar client
	2025/10/02 22:20:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:20:27 Serving insecurely on HTTP port: 9090
	2025/10/02 22:20:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/02 22:20:26 Starting overwatch
	
	
	==> storage-provisioner [2e1b89716b7c8f048407a03f59e1e05cb8799115581f388a5b987cc1e38cb88c] <==
	I1002 22:20:08.850556       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1002 22:20:38.852668       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
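The fatal above is the provisioner's start-up check: it could not reach the apiserver ClusterIP (10.96.0.1:443) within the client's 32s timeout, so the container exited and the replacement below succeeded once networking settled. A sketch of the failing call with client-go (an illustration, not the provisioner's exact source):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points the client at the kubernetes service
	// ClusterIP, 10.96.0.1:443 in these logs.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /version with the client's default timeout; this is the call
	// that produced the F1002 line above when it timed out.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("error getting server version: %v", err)
	}
	log.Printf("server version: %s", v.GitVersion)
}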
	
	
	==> storage-provisioner [7502d4f4598c15d0639e48931410f1dfd548e8ab6d7cb9f57a61c196c9b65208] <==
	I1002 22:20:39.396683       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 22:20:39.396737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1002 22:20:39.399885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:42.854876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:47.116643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:50.714739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:53.768165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:56.790455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:56.798118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.798282       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 22:20:56.802363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014!
	W1002 22:20:56.803475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.810510       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"54f1ad02-57ac-43be-8016-6454cc1639da", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014 became leader
	W1002 22:20:56.817207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1002 22:20:56.902904       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-975002_270cb49f-a937-4324-bf17-3f1b61381014!
	W1002 22:20:58.821218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:20:58.826487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:00.829986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:00.837465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:02.842493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:02.847584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:04.851103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:04.866385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:06.898296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 22:21:06.907452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
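The warning repeats on every poll because this provisioner still takes its leader-election lock on a v1 Endpoints object; coordination.k8s.io/v1 Leases are the non-deprecated lock kind. A client-go sketch of the Lease-based equivalent (the lease name mirrors the lock acquired above; the timings are illustrative):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease-based lock instead of the deprecated Endpoints lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller here
			},
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}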
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-975002 -n no-preload-975002
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-975002 -n no-preload-975002: exit status 2 (445.49134ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
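The --format value here is a Go text/template rendered against minikube's status struct, which is why {{.APIServer}} prints the bare "Running" seen in the stdout above. A toy equivalent (the Status type below is illustrative, not minikube's actual struct):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders; the field names follow
// the --format references used in these tests.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
}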
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-975002 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1002 22:21:07.861240 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.09s)
E1002 22:27:07.531469 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.070873 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.077328 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.089697 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.111715 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.153211 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.234664 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.396707 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:34.718641 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:35.359995 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:36.641404 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:38.858287 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:38.864710 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:38.876149 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:38.897572 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:38.939101 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:39.020709 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:39.182261 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:39.203621 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:39.504266 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:40.146387 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:41.427830 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:43.989776 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:44.325651 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:49.111319 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:54.568040 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:27:59.352690 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/kindnet-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.42
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 21.59
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 180.36
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.88
48 TestAddons/StoppedEnableDisable 12.22
49 TestCertOptions 38.27
50 TestCertExpiration 244.47
59 TestErrorSpam/setup 35.24
60 TestErrorSpam/start 0.79
61 TestErrorSpam/status 1.15
62 TestErrorSpam/pause 6.51
63 TestErrorSpam/unpause 6.04
64 TestErrorSpam/stop 1.45
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 81.8
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 24.24
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
76 TestFunctional/serial/CacheCmd/cache/add_local 1.1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 32.2
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.49
87 TestFunctional/serial/LogsFileCmd 1.46
88 TestFunctional/serial/InvalidService 3.91
90 TestFunctional/parallel/ConfigCmd 0.48
91 TestFunctional/parallel/DashboardCmd 13.62
92 TestFunctional/parallel/DryRun 0.43
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 1.02
99 TestFunctional/parallel/AddonsCmd 0.25
100 TestFunctional/parallel/PersistentVolumeClaim 26.92
102 TestFunctional/parallel/SSHCmd 0.75
103 TestFunctional/parallel/CpCmd 2.45
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 2.08
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
114 TestFunctional/parallel/License 0.36
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.41
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 7.88
131 TestFunctional/parallel/MountCmd/specific-port 2.01
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.28
133 TestFunctional/parallel/ServiceCmd/List 0.58
134 TestFunctional/parallel/ServiceCmd/JSONOutput 1.43
138 TestFunctional/parallel/Version/short 0.07
139 TestFunctional/parallel/Version/components 1.34
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
145 TestFunctional/parallel/ImageCommands/Setup 0.63
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 204.49
164 TestMultiControlPlane/serial/DeployApp 8.31
165 TestMultiControlPlane/serial/PingHostFromPods 1.45
166 TestMultiControlPlane/serial/AddWorkerNode 61.09
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
169 TestMultiControlPlane/serial/CopyFile 19.85
170 TestMultiControlPlane/serial/StopSecondaryNode 12.7
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 20.77
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 115.99
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.81
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
177 TestMultiControlPlane/serial/StopCluster 35.64
178 TestMultiControlPlane/serial/RestartCluster 72.63
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
180 TestMultiControlPlane/serial/AddSecondaryNode 82.21
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 79.06
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.71
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 43.09
211 TestKicCustomNetwork/use_default_bridge_network 35.48
212 TestKicExistingNetwork 36.72
213 TestKicCustomSubnet 38.99
214 TestKicStaticIP 38.35
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 74.15
219 TestMountStart/serial/StartWithMountFirst 9.33
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.79
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.2
226 TestMountStart/serial/RestartStopped 7.65
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 139.94
231 TestMultiNode/serial/DeployApp2Nodes 4.91
232 TestMultiNode/serial/PingHostFrom2Pods 0.99
233 TestMultiNode/serial/AddNode 59.09
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.09
237 TestMultiNode/serial/StopNode 2.26
238 TestMultiNode/serial/StartAfterStop 7.88
239 TestMultiNode/serial/RestartKeepsNodes 73.39
240 TestMultiNode/serial/DeleteNode 5.48
241 TestMultiNode/serial/StopMultiNode 23.76
242 TestMultiNode/serial/RestartMultiNode 57.83
243 TestMultiNode/serial/ValidateNameConflict 37.21
248 TestPreload 155.65
250 TestScheduledStopUnix 108.28
253 TestInsufficientStorage 13.15
254 TestRunningBinaryUpgrade 53.63
256 TestKubernetesUpgrade 358.08
257 TestMissingContainerUpgrade 110
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 55.99
261 TestNoKubernetes/serial/StartWithStopK8s 38.49
262 TestNoKubernetes/serial/Start 9.77
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
264 TestNoKubernetes/serial/ProfileList 1.5
265 TestNoKubernetes/serial/Stop 1.29
266 TestNoKubernetes/serial/StartNoArgs 9.75
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.47
268 TestStoppedBinaryUpgrade/Setup 1.45
269 TestStoppedBinaryUpgrade/Upgrade 65.42
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
279 TestPause/serial/Start 82.49
280 TestPause/serial/SecondStartNoReconfiguration 17.45
289 TestNetworkPlugins/group/false 3.61
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.11
295 TestStartStop/group/old-k8s-version/serial/DeployApp 8.49
297 TestStartStop/group/old-k8s-version/serial/Stop 11.93
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.96
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
301 TestStartStop/group/old-k8s-version/serial/SecondStart 60.37
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
308 TestStartStop/group/embed-certs/serial/FirstStart 89.02
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.85
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/embed-certs/serial/DeployApp 9.33
316 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
320 TestStartStop/group/no-preload/serial/FirstStart 69.97
321 TestStartStop/group/embed-certs/serial/Stop 13
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
323 TestStartStop/group/embed-certs/serial/SecondStart 55.58
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/no-preload/serial/DeployApp 9.35
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
330 TestStartStop/group/no-preload/serial/Stop 13.53
332 TestStartStop/group/newest-cni/serial/FirstStart 47.28
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
334 TestStartStop/group/no-preload/serial/SecondStart 59
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 1.35
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 16.2
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
347 TestNetworkPlugins/group/auto/Start 93.97
349 TestNetworkPlugins/group/kindnet/Start 87.91
350 TestNetworkPlugins/group/auto/KubeletFlags 0.29
351 TestNetworkPlugins/group/auto/NetCatPod 10.29
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
353 TestNetworkPlugins/group/auto/DNS 0.16
354 TestNetworkPlugins/group/auto/Localhost 0.14
355 TestNetworkPlugins/group/auto/HairPin 0.13
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
357 TestNetworkPlugins/group/kindnet/NetCatPod 9.3
358 TestNetworkPlugins/group/kindnet/DNS 0.28
359 TestNetworkPlugins/group/kindnet/Localhost 0.23
360 TestNetworkPlugins/group/kindnet/HairPin 0.15
361 TestNetworkPlugins/group/calico/Start 66.72
362 TestNetworkPlugins/group/custom-flannel/Start 72.51
363 TestNetworkPlugins/group/calico/ControllerPod 6.02
364 TestNetworkPlugins/group/calico/KubeletFlags 0.33
365 TestNetworkPlugins/group/calico/NetCatPod 11.28
366 TestNetworkPlugins/group/calico/DNS 0.2
367 TestNetworkPlugins/group/calico/Localhost 0.13
368 TestNetworkPlugins/group/calico/HairPin 0.14
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
371 TestNetworkPlugins/group/custom-flannel/DNS 0.2
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
374 TestNetworkPlugins/group/enable-default-cni/Start 89.54
375 TestNetworkPlugins/group/flannel/Start 63.37
376 TestNetworkPlugins/group/flannel/ControllerPod 6
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
378 TestNetworkPlugins/group/flannel/NetCatPod 11.25
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
381 TestNetworkPlugins/group/flannel/DNS 0.16
382 TestNetworkPlugins/group/flannel/Localhost 0.14
383 TestNetworkPlugins/group/flannel/HairPin 0.14
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
387 TestNetworkPlugins/group/bridge/Start 73
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.29
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (14.42s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-058204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-058204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.416445282s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.42s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 21:05:49.208961 1272514 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 21:05:49.209049 1272514 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
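The preload.go lines above reduce to a stat of the expected tarball in the local minikube cache. A minimal sketch under that assumption (the helper and the filename composition are inferred from the logged path, not taken from minikube source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists reports whether the cached preload tarball for the given
// Kubernetes version, container runtime, and architecture is present.
func preloadExists(minikubeHome, k8sVersion, runtime, arch string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
	path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(path)
	return path, err == nil
}

func main() {
	home, _ := os.UserHomeDir()
	if path, ok := preloadExists(filepath.Join(home, ".minikube"), "v1.28.0", "cri-o", "arm64"); ok {
		fmt.Println("Found local preload:", path)
	}
}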

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-058204
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-058204: exit status 85 (91.547266ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-058204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-058204 │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:05:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:05:34.838546 1272519 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:05:34.838877 1272519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:05:34.838892 1272519 out.go:374] Setting ErrFile to fd 2...
	I1002 21:05:34.838899 1272519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:05:34.839182 1272519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	W1002 21:05:34.839333 1272519 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21682-1270657/.minikube/config/config.json: open /home/jenkins/minikube-integration/21682-1270657/.minikube/config/config.json: no such file or directory
	I1002 21:05:34.839749 1272519 out.go:368] Setting JSON to true
	I1002 21:05:34.840623 1272519 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20860,"bootTime":1759418275,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:05:34.840694 1272519 start.go:140] virtualization:  
	I1002 21:05:34.844650 1272519 out.go:99] [download-only-058204] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 21:05:34.844891 1272519 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 21:05:34.844941 1272519 notify.go:220] Checking for updates...
	I1002 21:05:34.847739 1272519 out.go:171] MINIKUBE_LOCATION=21682
	I1002 21:05:34.850643 1272519 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:05:34.853438 1272519 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:05:34.856432 1272519 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:05:34.859395 1272519 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 21:05:34.865019 1272519 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 21:05:34.865279 1272519 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:05:34.892163 1272519 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:05:34.892280 1272519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:05:34.949973 1272519 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 21:05:34.941115534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:05:34.950106 1272519 docker.go:318] overlay module found
	I1002 21:05:34.953118 1272519 out.go:99] Using the docker driver based on user configuration
	I1002 21:05:34.953162 1272519 start.go:304] selected driver: docker
	I1002 21:05:34.953169 1272519 start.go:924] validating driver "docker" against <nil>
	I1002 21:05:34.953270 1272519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:05:35.005472 1272519 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 21:05:34.994199759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:05:35.005660 1272519 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:05:35.005967 1272519 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 21:05:35.006176 1272519 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:05:35.011262 1272519 out.go:171] Using Docker driver with root privileges
	I1002 21:05:35.014417 1272519 cni.go:84] Creating CNI manager for ""
	I1002 21:05:35.014513 1272519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:05:35.014527 1272519 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:05:35.014617 1272519 start.go:348] cluster config:
	{Name:download-only-058204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-058204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:05:35.017772 1272519 out.go:99] Starting "download-only-058204" primary control-plane node in "download-only-058204" cluster
	I1002 21:05:35.017833 1272519 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:05:35.020953 1272519 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:05:35.021036 1272519 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 21:05:35.021136 1272519 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:05:35.039244 1272519 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 21:05:35.040370 1272519 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 21:05:35.040502 1272519 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 21:05:35.080891 1272519 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:05:35.080936 1272519 cache.go:58] Caching tarball of preloaded images
	I1002 21:05:35.081178 1272519 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 21:05:35.084545 1272519 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 21:05:35.084579 1272519 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 21:05:35.181544 1272519 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1002 21:05:35.181715 1272519 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:05:43.710504 1272519 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	
	
	* The control-plane node download-only-058204 host does not exist
	  To start a cluster, run: "minikube start -p download-only-058204"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
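The download.go line inside the log above appends the expected digest as a ?checksum=md5:... query parameter; that URL form matches the hashicorp/go-getter convention, where the client fetches the file, hashes it, and rejects the download on mismatch. A sketch assuming go-getter (an illustration, not minikube's exact download code):

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	client := &getter.Client{
		// The checksum travels in the URL; go-getter strips the parameter
		// and verifies the md5 after the fetch completes.
		Src: "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/" +
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4" +
			"?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b",
		Dst:  "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		Mode: getter.ClientModeFile, // fetch a single file, not a directory tree
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %v", err)
	}
}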

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-058204
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (21.59s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-638488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-638488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (21.587965702s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (21.59s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 21:06:11.241945 1272514 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 21:06:11.241984 1272514 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-638488
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-638488: exit status 85 (83.485241ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-058204 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-058204 │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │ 02 Oct 25 21:05 UTC │
	│ delete  │ -p download-only-058204                                                                                                                                                   │ download-only-058204 │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │ 02 Oct 25 21:05 UTC │
	│ start   │ -o=json --download-only -p download-only-638488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-638488 │ jenkins │ v1.37.0 │ 02 Oct 25 21:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:05:49
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:05:49.699327 1272711 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:05:49.699499 1272711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:05:49.699520 1272711 out.go:374] Setting ErrFile to fd 2...
	I1002 21:05:49.699525 1272711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:05:49.699777 1272711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:05:49.700184 1272711 out.go:368] Setting JSON to true
	I1002 21:05:49.700996 1272711 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20875,"bootTime":1759418275,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:05:49.701066 1272711 start.go:140] virtualization:  
	I1002 21:05:49.704393 1272711 out.go:99] [download-only-638488] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:05:49.704615 1272711 notify.go:220] Checking for updates...
	I1002 21:05:49.707577 1272711 out.go:171] MINIKUBE_LOCATION=21682
	I1002 21:05:49.710582 1272711 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:05:49.713448 1272711 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:05:49.716264 1272711 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:05:49.719203 1272711 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 21:05:49.724951 1272711 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 21:05:49.725277 1272711 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:05:49.758068 1272711 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:05:49.758210 1272711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:05:49.821906 1272711 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 21:05:49.812668254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:05:49.822017 1272711 docker.go:318] overlay module found
	I1002 21:05:49.825075 1272711 out.go:99] Using the docker driver based on user configuration
	I1002 21:05:49.825111 1272711 start.go:304] selected driver: docker
	I1002 21:05:49.825122 1272711 start.go:924] validating driver "docker" against <nil>
	I1002 21:05:49.825236 1272711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:05:49.883991 1272711 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 21:05:49.875138413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:05:49.884142 1272711 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:05:49.884405 1272711 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 21:05:49.884555 1272711 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:05:49.887635 1272711 out.go:171] Using Docker driver with root privileges
	I1002 21:05:49.890521 1272711 cni.go:84] Creating CNI manager for ""
	I1002 21:05:49.890592 1272711 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:05:49.890605 1272711 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:05:49.890694 1272711 start.go:348] cluster config:
	{Name:download-only-638488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-638488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:05:49.893811 1272711 out.go:99] Starting "download-only-638488" primary control-plane node in "download-only-638488" cluster
	I1002 21:05:49.893837 1272711 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:05:49.896732 1272711 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:05:49.896773 1272711 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:05:49.896886 1272711 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:05:49.912839 1272711 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 21:05:49.912980 1272711 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 21:05:49.913013 1272711 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 21:05:49.913024 1272711 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 21:05:49.913035 1272711 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 21:05:49.954311 1272711 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1002 21:05:49.954339 1272711 cache.go:58] Caching tarball of preloaded images
	I1002 21:05:49.954511 1272711 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:05:49.957649 1272711 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1002 21:05:49.957683 1272711 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1002 21:05:50.056213 1272711 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1002 21:05:50.056269 1272711 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21682-1270657/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-638488 host does not exist
	  To start a cluster, run: "minikube start -p download-only-638488"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
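The preload download in the "Last Start" log above fetches the expected MD5 from the GCS API and appends it to the tarball URL as "?checksum=md5:...", verifying the digest once the transfer completes. Below is a minimal Go sketch of that verify-on-download step, assuming a plain HTTP GET and a hypothetical helper name; minikube's real downloader is more involved.

	package preload

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url to dest and fails if the MD5 of the
	// downloaded bytes does not match want (hex-encoded, as logged above).
	func downloadWithMD5(url, dest, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		// Hash while writing so the tarball is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}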

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-638488
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1002 21:06:12.392784 1272514 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-339125 --alsologtostderr --binary-mirror http://127.0.0.1:33819 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-339125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-339125
--- PASS: TestBinaryMirror (0.60s)
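TestBinaryMirror exercises the other checksum form seen above, "checksum=file:<url>.sha256", where the expected digest is published in a sidecar file next to the binary rather than inlined in the URL. A sketch of that resolution step, with illustrative helper names rather than minikube's actual API:

	package mirror

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// sha256FromSidecar fetches a ".sha256" file and returns its first
	// whitespace-separated field, the expected hex digest.
	func sha256FromSidecar(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		b, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(b))
		if len(fields) == 0 {
			return "", fmt.Errorf("empty checksum file %s", url)
		}
		return fields[0], nil
	}

	// verifySHA256 compares the digest of data against the sidecar value.
	func verifySHA256(data []byte, sidecarURL string) error {
		want, err := sha256FromSidecar(sidecarURL)
		if err != nil {
			return err
		}
		sum := sha256.Sum256(data)
		if got := hex.EncodeToString(sum[:]); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}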

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-806706
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-806706: exit status 85 (78.229376ms)

-- stdout --
	* Profile "addons-806706" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-806706"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-806706
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-806706: exit status 85 (84.296818ms)

-- stdout --
	* Profile "addons-806706" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-806706"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (180.36s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-806706 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-806706 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m0.358942152s)
--- PASS: TestAddons/Setup (180.36s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-806706 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-806706 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-806706 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-806706 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [35fa3475-6217-451a-b003-6bd3ded70eb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [35fa3475-6217-451a-b003-6bd3ded70eb2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.00328426s
addons_test.go:694: (dbg) Run:  kubectl --context addons-806706 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-806706 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-806706 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-806706 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.88s)

TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-806706
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-806706: (11.92179752s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-806706
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-806706
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-806706
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

TestCertOptions (38.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-280401 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.551972856s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-280401 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-280401 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-280401 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-280401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-280401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-280401: (1.991087261s)
--- PASS: TestCertOptions (38.27s)

TestCertExpiration (244.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1002 22:11:25.853770 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-247949 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.301855915s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-247949 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.714438262s)
helpers_test.go:175: Cleaning up "cert-expiration-247949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-247949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-247949: (2.452189406s)
--- PASS: TestCertExpiration (244.47s)
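TestCertExpiration first provisions the cluster with --cert-expiration=3m, then restarts it with 8760h and expects fresh certificates with the longer lifetime. A minimal sketch of inspecting a certificate's remaining lifetime, assuming a PEM file such as the /var/lib/minikube/certs/apiserver.crt that TestCertOptions reads via openssl:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path is illustrative; copy the cert out of the node first.
		raw, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("expires %s (in %s)\n",
			cert.NotAfter.Format(time.RFC3339),
			time.Until(cert.NotAfter).Round(time.Minute))
	}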

TestErrorSpam/setup (35.24s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-563038 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-563038 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-563038 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-563038 --driver=docker  --container-runtime=crio: (35.23919438s)
--- PASS: TestErrorSpam/setup (35.24s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (6.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause: exit status 80 (2.5179679s)

-- stdout --
	* Pausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause: exit status 80 (2.153501521s)

-- stdout --
	* Pausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause: exit status 80 (1.836385067s)

-- stdout --
	* Pausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.51s)
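Every failed pause above traces to the same probe: minikube runs "sudo runc list -f json" inside the node and surfaces a non-zero exit as GUEST_PAUSE. A sketch of that probe via os/exec; the comment about runc's default state root is an assumption about why the open fails on this CRI-O node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// runc exits non-zero when its state root (/run/runc by default
			// for root) is missing, matching the "open /run/runc" error above.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}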

TestErrorSpam/unpause (6.04s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause: exit status 80 (1.93254991s)

-- stdout --
	* Unpausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause: exit status 80 (1.874766413s)

-- stdout --
	* Unpausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause: exit status 80 (2.232915892s)

-- stdout --
	* Unpausing node nospam-563038 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-02T21:13:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.04s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 stop: (1.248229387s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-563038 --log_dir /tmp/nospam-563038 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21682-1270657/.minikube/files/etc/test/nested/copy/1272514/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1002 21:14:14.593341 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.599789 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.611180 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.632550 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.673920 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.755345 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:14.916881 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:15.238551 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:15.880580 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:17.161910 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:19.723177 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:24.844495 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:35.085945 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:55.567553 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-758263 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.80065985s)
--- PASS: TestFunctional/serial/StartWithProxy (81.80s)
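The cert_rotation lines above fire on a roughly doubling schedule, from a few milliseconds between the first attempts up to about 20 seconds, the exponential backoff client-go applies while the profile's client.crt is missing. A generic sketch of that retry shape (not client-go's actual implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the delay after each failure
	// and capping it at max, mirroring the cadence visible in the log.
	func retryWithBackoff(attempts int, initial, max time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			time.Sleep(delay)
			if delay *= 2; delay > max {
				delay = max
			}
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 5*time.Millisecond, 20*time.Second, func() error {
			return fmt.Errorf("open client.crt: no such file or directory")
		})
		fmt.Println("gave up:", err)
	}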

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (24.24s)

=== RUN   TestFunctional/serial/SoftStart
I1002 21:15:13.003622 1272514 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --alsologtostderr -v=8
E1002 21:15:36.528838 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-758263 --alsologtostderr -v=8: (24.238474679s)
functional_test.go:678: soft start took 24.243508953s for "functional-758263" cluster.
I1002 21:15:37.242424 1272514 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (24.24s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-758263 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:3.1: (1.249506387s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:3.3: (1.193115722s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 cache add registry.k8s.io/pause:latest: (1.149004787s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-758263 /tmp/TestFunctionalserialCacheCmdcacheadd_local3558769042/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache add minikube-local-cache-test:functional-758263
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache delete minikube-local-cache-test:functional-758263
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-758263
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.648517ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
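The cache_reload sequence above is a round trip: delete the image inside the node, confirm "crictl inspecti" now fails, then run "minikube cache reload" to push the host-side cached image back in. A condensed sketch of that sequence, reusing the profile name from the log and eliding most error handling:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary under test and echoes its output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p := "functional-758263"
		run("-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
		if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		run("-p", p, "cache", "reload") // re-push images from the host cache
		run("-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest") // succeeds now
	}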

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 kubectl -- --context functional-758263 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-758263 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (32.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-758263 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.201677915s)
functional_test.go:776: restart took 32.201788607s for "functional-758263" cluster.
I1002 21:16:16.992877 1272514 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.20s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-758263 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
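ComponentHealth asserts that each control-plane pod reports phase Running and a Ready condition, as echoed in the lines above. A minimal sketch of that check against kubectl's JSON output, using throwaway struct types that cover only the fields the probe reads:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList mirrors just enough of the PodList schema for this check.
	type podList struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Phase      string
				Conditions []struct{ Type, Status string }
			}
		}
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-758263",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}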

TestFunctional/serial/LogsCmd (1.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 logs: (1.491863178s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 logs --file /tmp/TestFunctionalserialLogsFileCmd970984221/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 logs --file /tmp/TestFunctionalserialLogsFileCmd970984221/001/logs.txt: (1.456547325s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (3.91s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-758263 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-758263
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-758263: exit status 115 (385.794028ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31991 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-758263 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.91s)

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 config get cpus: exit status 14 (107.445454ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 config get cpus: exit status 14 (84.750035ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
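
The two non-zero exits above are the expected behavior: `config get` on an unset key returns exit status 14. A minimal Go sketch of asserting that from outside the test suite (the binary path and profile name are the ones from this run; everything else is standard library):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// "config get" on a key that was never set should exit 14.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-758263", "config", "get", "cpus")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			fmt.Printf("got expected exit 14: %s", out)
			return
		}
		fmt.Printf("unexpected result (err=%v): %s", err, out)
	}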

TestFunctional/parallel/DashboardCmd (13.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-758263 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-758263 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1298656: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.62s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-758263 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.416106ms)

-- stdout --
	* [functional-758263] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 21:26:53.024677 1298195 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:26:53.024798 1298195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:53.024804 1298195 out.go:374] Setting ErrFile to fd 2...
	I1002 21:26:53.024808 1298195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:53.025101 1298195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:26:53.025475 1298195 out.go:368] Setting JSON to false
	I1002 21:26:53.026403 1298195 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22138,"bootTime":1759418275,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:26:53.026485 1298195 start.go:140] virtualization:  
	I1002 21:26:53.029865 1298195 out.go:179] * [functional-758263] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 21:26:53.032861 1298195 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:26:53.032918 1298195 notify.go:220] Checking for updates...
	I1002 21:26:53.040986 1298195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:26:53.043952 1298195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:26:53.046955 1298195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:26:53.049836 1298195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:26:53.052754 1298195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:26:53.056190 1298195 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:26:53.056769 1298195 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:26:53.094244 1298195 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:26:53.094386 1298195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:26:53.152994 1298195 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:26:53.143736891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:26:53.153107 1298195 docker.go:318] overlay module found
	I1002 21:26:53.156118 1298195 out.go:179] * Using the docker driver based on existing profile
	I1002 21:26:53.159112 1298195 start.go:304] selected driver: docker
	I1002 21:26:53.159134 1298195 start.go:924] validating driver "docker" against &{Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:26:53.159240 1298195 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:26:53.162703 1298195 out.go:203] 
	W1002 21:26:53.165579 1298195 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:26:53.168443 1298195 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
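
The RSRC_INSUFFICIENT_REQ_MEMORY exit above boils down to a floor check on the requested allocation. A minimal sketch of that kind of validation, using the 1800MB minimum reported in the output (the function and message here are illustrative, not minikube's actual implementation):

	package main

	import "fmt"

	const minUsableMB = 1800 // floor reported by the run above

	// validateMemory mimics the dry-run check: reject requests below the floor.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, like --memory 250MB above
		fmt.Println(validateMemory(4096)) // ok: the profile's configured value
	}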

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-758263 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-758263 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.395015ms)

-- stdout --
	* [functional-758263] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 21:26:52.827980 1298149 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:26:52.828302 1298149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:52.828336 1298149 out.go:374] Setting ErrFile to fd 2...
	I1002 21:26:52.828363 1298149 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:26:52.830231 1298149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:26:52.830820 1298149 out.go:368] Setting JSON to false
	I1002 21:26:52.831990 1298149 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22138,"bootTime":1759418275,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 21:26:52.832141 1298149 start.go:140] virtualization:  
	I1002 21:26:52.836079 1298149 out.go:179] * [functional-758263] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 21:26:52.839857 1298149 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:26:52.839998 1298149 notify.go:220] Checking for updates...
	I1002 21:26:52.845913 1298149 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:26:52.848764 1298149 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 21:26:52.852674 1298149 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 21:26:52.855513 1298149 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:26:52.858507 1298149 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:26:52.861999 1298149 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:26:52.862673 1298149 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:26:52.887238 1298149 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 21:26:52.887342 1298149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:26:52.953453 1298149 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 21:26:52.943579884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:26:52.953557 1298149 docker.go:318] overlay module found
	I1002 21:26:52.956672 1298149 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 21:26:52.959467 1298149 start.go:304] selected driver: docker
	I1002 21:26:52.959502 1298149 start.go:924] validating driver "docker" against &{Name:functional-758263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-758263 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:26:52.959600 1298149 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:26:52.963184 1298149 out.go:203] 
	W1002 21:26:52.966113 1298149 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 21:26:52.969042 1298149 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
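
The French output above comes from the caller's locale environment, not a flag. A sketch of reproducing it by hand, assuming (as the test appears to) that minikube honors LC_ALL:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-758263",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		// Force a French locale; the status lines should come out translated.
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, _ := cmd.CombinedOutput() // exit 23 is expected, same as the English run
		fmt.Printf("%s", out)
	}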

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
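
The -f argument above is a Go text/template rendered against a status value. A minimal sketch of how such a format string expands (the Status struct is a stand-in, not minikube's actual type; the logged command spells "kublet" because that is the literal format string the test passes):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the value minikube renders its status from.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		tmpl.Execute(os.Stdout, s) // host:Running,kubelet:Running,...
	}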

TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

TestFunctional/parallel/PersistentVolumeClaim (26.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f7dadba3-b176-4cdf-bcbd-f3ed05a2b0d3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003145712s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-758263 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-758263 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-758263 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-758263 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [321d94c1-a9f4-43ef-83a3-7ed25e38e665] Pending
helpers_test.go:352: "sp-pod" [321d94c1-a9f4-43ef-83a3-7ed25e38e665] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [321d94c1-a9f4-43ef-83a3-7ed25e38e665] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006874776s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-758263 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-758263 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-758263 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6b7a183d-8cb2-4141-9938-a4bb289b541f] Pending
helpers_test.go:352: "sp-pod" [6b7a183d-8cb2-4141-9938-a4bb289b541f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00355913s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-758263 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.92s)

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh -n functional-758263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cp functional-758263:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4260570615/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh -n functional-758263 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh -n functional-758263 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1272514/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /etc/test/nested/copy/1272514/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1272514.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /etc/ssl/certs/1272514.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1272514.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /usr/share/ca-certificates/1272514.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12725142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /etc/ssl/certs/12725142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12725142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /usr/share/ca-certificates/12725142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-758263 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "sudo systemctl is-active docker": exit status 1 (356.975456ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "sudo systemctl is-active containerd": exit status 1 (376.630361ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
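
The "Process exited with status 3" lines are systemctl's convention, not a failure: `systemctl is-active` prints the unit state and exits non-zero for anything other than active, so "inactive" plus a non-zero exit is exactly what a disabled runtime should produce. A sketch of checking that over the same ssh path (exit-code handling only; commands as logged above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runtimeInactive reports whether the given unit is not active inside the node.
	func runtimeInactive(unit string) (bool, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-758263",
			"ssh", "sudo systemctl is-active "+unit)
		err := cmd.Run()
		if err == nil {
			return false, nil // unit is active
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return true, nil // non-zero exit: inactive or failed, as with docker and containerd above
		}
		return false, err // the ssh invocation itself failed
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			inactive, err := runtimeInactive(unit)
			fmt.Println(unit, "inactive:", inactive, "err:", err)
		}
	}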

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1294665: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-758263 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [df6bc074-5e36-4d9b-bd0e-02ad6bbab4dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [df6bc074-5e36-4d9b-bd0e-02ad6bbab4dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003524284s
I1002 21:16:34.308479 1272514 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-758263 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)
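
The jsonpath query above walks .status.loadBalancer.ingress[0].ip on the Service object. The same extraction in plain Go against `kubectl ... -o json` output (the struct keeps only the fields on that path; field names follow the Kubernetes Service schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// svcStatus models only the path the test queries.
	type svcStatus struct {
		Status struct {
			LoadBalancer struct {
				Ingress []struct {
					IP string `json:"ip"`
				} `json:"ingress"`
			} `json:"loadBalancer"`
		} `json:"status"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-758263",
			"get", "svc", "nginx-svc", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var svc svcStatus
		if err := json.Unmarshal(out, &svc); err != nil {
			panic(err)
		}
		if ing := svc.Status.LoadBalancer.Ingress; len(ing) > 0 {
			fmt.Println(ing[0].IP) // the tunnel-assigned IP, e.g. 10.110.168.13 above
		}
	}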

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.168.13 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-758263 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "355.066454ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.43611ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "355.94222ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.774062ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
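
`profile list -o json` is the machine-readable counterpart of the table output, and --light presumably skips the per-profile status probes, which would explain the ~60ms runs versus ~356ms. A sketch of consuming the JSON (the top-level "valid"/"invalid" keys and the Name field are assumptions about the output shape, not a documented contract):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profiles models the assumed shape of `profile list -o json`.
	type profiles struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var p profiles
		if err := json.Unmarshal(out, &p); err != nil {
			panic(err)
		}
		for _, v := range p.Valid {
			fmt.Println("valid profile:", v.Name)
		}
	}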

TestFunctional/parallel/MountCmd/any-port (7.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdany-port2009901147/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759440399586574513" to /tmp/TestFunctionalparallelMountCmdany-port2009901147/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759440399586574513" to /tmp/TestFunctionalparallelMountCmdany-port2009901147/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759440399586574513" to /tmp/TestFunctionalparallelMountCmdany-port2009901147/001/test-1759440399586574513
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.688029ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 21:26:39.927348 1272514 retry.go:31] will retry after 535.675885ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 21:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 21:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 21:26 test-1759440399586574513
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh cat /mount-9p/test-1759440399586574513
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-758263 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c2b35314-cb91-4e88-8b1a-5b1fc301646e] Pending
helpers_test.go:352: "busybox-mount" [c2b35314-cb91-4e88-8b1a-5b1fc301646e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c2b35314-cb91-4e88-8b1a-5b1fc301646e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c2b35314-cb91-4e88-8b1a-5b1fc301646e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003155734s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-758263 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdany-port2009901147/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
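
The `retry.go:31] will retry after 535.675885ms` line shows the pattern used throughout these mount tests: probe the mount with findmnt, then retry with a growing delay until the 9p mount appears. A minimal sketch of that loop (the attempt cap and plain doubling are simplifications of the test helper):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retry runs fn up to attempts times, sleeping with a doubling backoff
	// between failures, and returns the last error if every attempt fails.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		backoff := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return err
	}

	func main() {
		// Probe until the 9p mount is visible inside the node, as the test does.
		err := retry(5, 500*time.Millisecond, func() error {
			return exec.Command("out/minikube-linux-arm64", "-p", "functional-758263",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		})
		fmt.Println("mount visible:", err == nil)
	}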

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdspecific-port2676022350/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.355026ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 21:26:47.789213 1272514 retry.go:31] will retry after 637.961147ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdspecific-port2676022350/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "sudo umount -f /mount-9p": exit status 1 (278.012613ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-758263 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdspecific-port2676022350/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T" /mount1: exit status 1 (692.950899ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 21:26:50.169197 1272514 retry.go:31] will retry after 681.539196ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-758263 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-758263 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3193312377/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.28s)

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 service list -o json: (1.425060999s)
functional_test.go:1504: Took "1.42514675s" to run "out/minikube-linux-arm64 -p functional-758263 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.43s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 version -o=json --components: (1.337730088s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-758263 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-758263 image ls --format short --alsologtostderr:
I1002 21:27:09.545952 1300884 out.go:360] Setting OutFile to fd 1 ...
I1002 21:27:09.546193 1300884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:09.546223 1300884 out.go:374] Setting ErrFile to fd 2...
I1002 21:27:09.546242 1300884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:09.546530 1300884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
I1002 21:27:09.547292 1300884 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:09.547854 1300884 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:09.548863 1300884 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
I1002 21:27:09.587199 1300884 ssh_runner.go:195] Run: systemctl --version
I1002 21:27:09.587254 1300884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
I1002 21:27:09.620227 1300884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
I1002 21:27:09.724629 1300884 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
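
Each of these listings is produced by running "sudo crictl images --output json" on the node (the last command in the stderr trace) and re-projecting the result. A sketch of the short format in Go; the images/repoTags field names follow the CRI image message and are an assumption here:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors just the fields this projection needs from crictl's JSON.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatalf("crictl failed: %v", err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("decode: %v", err)
	}
	// One repo tag per line, the same shape as `image ls --format short`.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}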

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-758263 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 0777d15d89ece │ 202MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-758263 image ls --format table --alsologtostderr:
I1002 21:27:10.421983 1301139 out.go:360] Setting OutFile to fd 1 ...
I1002 21:27:10.422235 1301139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.422261 1301139 out.go:374] Setting ErrFile to fd 2...
I1002 21:27:10.422290 1301139 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.422595 1301139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
I1002 21:27:10.423279 1301139 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.423423 1301139 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.423902 1301139 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
I1002 21:27:10.443355 1301139 ssh_runner.go:195] Run: systemctl --version
I1002 21:27:10.443408 1301139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
I1002 21:27:10.461182 1301139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
I1002 21:27:10.556572 1301139 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-758263 image ls --format json --alsologtostderr:
[{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c30258
3adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":["docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc","docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/p
ause:3.3"],"size":"487479"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"35f3cbee4fb77c3efb39f2723a21ce1819
06139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repo
Digests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d8
7c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-758263 image ls --format json --alsologtostderr:
I1002 21:27:10.163275 1301056 out.go:360] Setting OutFile to fd 1 ...
I1002 21:27:10.163861 1301056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.163891 1301056 out.go:374] Setting ErrFile to fd 2...
I1002 21:27:10.163909 1301056 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.164204 1301056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
I1002 21:27:10.164887 1301056 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.165221 1301056 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.166196 1301056 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
I1002 21:27:10.188119 1301056 ssh_runner.go:195] Run: systemctl --version
I1002 21:27:10.188170 1301056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
I1002 21:27:10.207023 1301056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
I1002 21:27:10.305625 1301056 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
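
The JSON stdout is a flat array of image records. A Go struct matching only the fields visible above (note that size is emitted as a string), reading a hypothetical images.json capture of that stdout:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// image covers the fields actually present in this run's output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// e.g. `minikube -p <profile> image ls --format json > images.json`
	data, err := os.ReadFile("images.json") // hypothetical capture file
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		// %.13s truncates the ID the way the table format does.
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}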

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-758263 image ls --format yaml --alsologtostderr:
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests:
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
- docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-758263 image ls --format yaml --alsologtostderr:
I1002 21:27:09.859775 1300984 out.go:360] Setting OutFile to fd 1 ...
I1002 21:27:09.860405 1300984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:09.860444 1300984 out.go:374] Setting ErrFile to fd 2...
I1002 21:27:09.860466 1300984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:09.860742 1300984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
I1002 21:27:09.861476 1300984 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:09.861633 1300984 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:09.862214 1300984 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
I1002 21:27:09.886408 1300984 ssh_runner.go:195] Run: systemctl --version
I1002 21:27:09.886457 1300984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
I1002 21:27:09.918360 1300984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
I1002 21:27:10.024608 1300984 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
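
The YAML stdout carries the same records as the JSON listing. A sketch decoding it, assuming the gopkg.in/yaml.v3 package (any YAML decoder would do):

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// image mirrors the fields shown in the YAML output above.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	data, err := os.ReadFile("images.yaml") // hypothetical capture of the stdout above
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := yaml.Unmarshal(data, &images); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded %d images\n", len(images))
}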

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-758263 ssh pgrep buildkitd: exit status 1 (345.681583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image build -t localhost/my-image:functional-758263 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-758263 image build -t localhost/my-image:functional-758263 testdata/build --alsologtostderr: (3.340457008s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-758263 image build -t localhost/my-image:functional-758263 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b52d6f7a8fd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-758263
--> 33088505bec
Successfully tagged localhost/my-image:functional-758263
33088505bec98a0b763bd80db80bc03888886fe3a40c3916ee0cee0ea9924773
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-758263 image build -t localhost/my-image:functional-758263 testdata/build --alsologtostderr:
I1002 21:27:10.095230 1301042 out.go:360] Setting OutFile to fd 1 ...
I1002 21:27:10.096173 1301042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.096198 1301042 out.go:374] Setting ErrFile to fd 2...
I1002 21:27:10.096205 1301042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:27:10.096545 1301042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
I1002 21:27:10.097350 1301042 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.098173 1301042 config.go:182] Loaded profile config "functional-758263": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:27:10.098815 1301042 cli_runner.go:164] Run: docker container inspect functional-758263 --format={{.State.Status}}
I1002 21:27:10.119977 1301042 ssh_runner.go:195] Run: systemctl --version
I1002 21:27:10.120035 1301042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-758263
I1002 21:27:10.143804 1301042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34281 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/functional-758263/id_rsa Username:docker}
I1002 21:27:10.242216 1301042 build_images.go:161] Building image from path: /tmp/build.1124624103.tar
I1002 21:27:10.242343 1301042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 21:27:10.250905 1301042 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1124624103.tar
I1002 21:27:10.256499 1301042 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1124624103.tar: stat -c "%s %y" /var/lib/minikube/build/build.1124624103.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1124624103.tar': No such file or directory
I1002 21:27:10.256529 1301042 ssh_runner.go:362] scp /tmp/build.1124624103.tar --> /var/lib/minikube/build/build.1124624103.tar (3072 bytes)
I1002 21:27:10.276658 1301042 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1124624103
I1002 21:27:10.284843 1301042 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1124624103 -xf /var/lib/minikube/build/build.1124624103.tar
I1002 21:27:10.293224 1301042 crio.go:315] Building image: /var/lib/minikube/build/build.1124624103
I1002 21:27:10.293288 1301042 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-758263 /var/lib/minikube/build/build.1124624103 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 21:27:13.327365 1301042 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-758263 /var/lib/minikube/build/build.1124624103 --cgroup-manager=cgroupfs: (3.034055755s)
I1002 21:27:13.327434 1301042 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1124624103
I1002 21:27:13.336064 1301042 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1124624103.tar
I1002 21:27:13.344288 1301042 build_images.go:217] Built localhost/my-image:functional-758263 from /tmp/build.1124624103.tar
I1002 21:27:13.344318 1301042 build_images.go:133] succeeded building to: functional-758263
I1002 21:27:13.344324 1301042 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
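
Per the stderr trace, image build tars the local context, copies it into the node, and shells out to podman there. The final step reduced to a local sketch, with the tag and context directory copied from this run:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of the `sudo podman build -t ...` step from the trace,
	// run against a local context directory containing the Dockerfile.
	cmd := exec.Command("podman", "build",
		"-t", "localhost/my-image:functional-758263",
		"testdata/build",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("podman build failed: %v", err)
	}
}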

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-758263
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image rm kicbase/echo-server:functional-758263 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 image ls
2025/10/02 21:27:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
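
All three UpdateContextCmd variants (this one and the two that follow) reduce to the same invocation. A sketch of driving it from Go; the binary path and profile name are this run's, substitute your own:

package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-758263", "update-context",
		"--alsologtostderr", "-v=2").CombinedOutput()
	if err != nil {
		log.Fatalf("update-context failed: %v\n%s", err, out)
	}
	log.Printf("kubeconfig updated:\n%s", out)
}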

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-758263 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-758263
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-758263
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-758263
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 21:29:14.582832 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:30:37.655136 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m23.586221456s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (204.49s)
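
The cluster bring-up is a plain two-command sequence. A sketch replaying it from Go, with arguments copied from the run above (the out/ binary path assumes this repo's layout):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes the minikube binary with the given args, streaming output.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	run("-p", "ha-830452", "start", "--ha", "--memory", "3072",
		"--wait", "true", "--driver=docker", "--container-runtime=crio")
	run("-p", "ha-830452", "status", "--alsologtostderr", "-v", "5")
}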

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 kubectl -- rollout status deployment/busybox: (5.514630105s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-9d6zk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-hgbsm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-k9vrk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-9d6zk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-hgbsm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-k9vrk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-9d6zk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-hgbsm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-k9vrk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.31s)
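
The deploy check reduces to: list the busybox pods, then run nslookup inside each one. A sketch using kubectl directly (the test goes through "minikube kubectl --", which is equivalent here); the context name is this run's:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath query as the test: space-separated pod names.
	out, err := exec.Command("kubectl", "--context", "ha-830452",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "--context", "ha-830452",
			"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			log.Fatalf("%s: %v\n%s", pod, err, res)
		}
		log.Printf("%s resolved kubernetes.default", pod)
	}
}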

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-9d6zk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-9d6zk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-hgbsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-hgbsm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-k9vrk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 kubectl -- exec busybox-7b57f96db7-k9vrk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)
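
The shell pipeline above (awk 'NR==5' | cut -d' ' -f3) grabs the third field of nslookup's fifth output line, which for busybox's nslookup is the resolved address of host.minikube.internal. The same extraction in Go, against an illustrative busybox-style output (strings.Fields collapses space runs, which matches the single-space separators on that line):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3` on nslookup output.
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4]) // NR==5: the fifth line
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3: the third field
}

func main() {
	// Illustrative busybox-style nslookup output (an assumption, not a capture).
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}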

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node add --alsologtostderr -v 5
E1002 21:31:25.853808 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:25.860333 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:25.871742 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:25.893212 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:25.934607 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:26.016101 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:26.177583 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:26.499718 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:27.141663 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:28.423089 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:30.985380 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:36.107612 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:31:46.349040 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 node add --alsologtostderr -v 5: (1m0.016010238s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5: (1.077384653s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.09s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-830452 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084719414s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.85s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 status --output json --alsologtostderr -v 5: (1.064693751s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp testdata/cp-test.txt ha-830452:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1172071073/001/cp-test_ha-830452.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452:/home/docker/cp-test.txt ha-830452-m02:/home/docker/cp-test_ha-830452_ha-830452-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test_ha-830452_ha-830452-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452:/home/docker/cp-test.txt ha-830452-m03:/home/docker/cp-test_ha-830452_ha-830452-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test_ha-830452_ha-830452-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452:/home/docker/cp-test.txt ha-830452-m04:/home/docker/cp-test_ha-830452_ha-830452-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test_ha-830452_ha-830452-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp testdata/cp-test.txt ha-830452-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1172071073/001/cp-test_ha-830452-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m02:/home/docker/cp-test.txt ha-830452:/home/docker/cp-test_ha-830452-m02_ha-830452.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test_ha-830452-m02_ha-830452.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m02:/home/docker/cp-test.txt ha-830452-m03:/home/docker/cp-test_ha-830452-m02_ha-830452-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test_ha-830452-m02_ha-830452-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m02:/home/docker/cp-test.txt ha-830452-m04:/home/docker/cp-test_ha-830452-m02_ha-830452-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test_ha-830452-m02_ha-830452-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp testdata/cp-test.txt ha-830452-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1172071073/001/cp-test_ha-830452-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m03:/home/docker/cp-test.txt ha-830452:/home/docker/cp-test_ha-830452-m03_ha-830452.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test_ha-830452-m03_ha-830452.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m03:/home/docker/cp-test.txt ha-830452-m02:/home/docker/cp-test_ha-830452-m03_ha-830452-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test_ha-830452-m03_ha-830452-m02.txt"
E1002 21:32:06.830526 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m03:/home/docker/cp-test.txt ha-830452-m04:/home/docker/cp-test_ha-830452-m03_ha-830452-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test_ha-830452-m03_ha-830452-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp testdata/cp-test.txt ha-830452-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1172071073/001/cp-test_ha-830452-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m04:/home/docker/cp-test.txt ha-830452:/home/docker/cp-test_ha-830452-m04_ha-830452.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452 "sudo cat /home/docker/cp-test_ha-830452-m04_ha-830452.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m04:/home/docker/cp-test.txt ha-830452-m02:/home/docker/cp-test_ha-830452-m04_ha-830452-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m02 "sudo cat /home/docker/cp-test_ha-830452-m04_ha-830452-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 cp ha-830452-m04:/home/docker/cp-test.txt ha-830452-m03:/home/docker/cp-test_ha-830452-m04_ha-830452-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 ssh -n ha-830452-m03 "sudo cat /home/docker/cp-test_ha-830452-m04_ha-830452-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.85s)
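
Every CopyFile step pairs a cp with an "ssh ... sudo cat" readback. One round trip, extracted as a sketch (profile, node, and paths copied from this run):

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "ha-830452", "ha-830452-m02", "/home/docker/cp-test.txt"

	// Copy the local test file onto the node.
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
		log.Fatalf("cp failed: %v", err)
	}
	// Read it back over ssh and compare with the source.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("readback failed: %v", err)
	}
	want, _ := os.ReadFile("testdata/cp-test.txt")
	if !bytes.Equal(got, want) {
		log.Fatal("copied file does not match the source")
	}
	log.Println("copy verified")
}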

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 node stop m02 --alsologtostderr -v 5: (11.946131161s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5: exit status 7 (755.015327ms)

-- stdout --
	ha-830452
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-830452-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-830452-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-830452-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1002 21:32:24.854079 1315868 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:32:24.854342 1315868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:32:24.854384 1315868 out.go:374] Setting ErrFile to fd 2...
	I1002 21:32:24.854517 1315868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:32:24.854853 1315868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:32:24.855147 1315868 out.go:368] Setting JSON to false
	I1002 21:32:24.855208 1315868 mustload.go:65] Loading cluster: ha-830452
	I1002 21:32:24.855322 1315868 notify.go:220] Checking for updates...
	I1002 21:32:24.855708 1315868 config.go:182] Loaded profile config "ha-830452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:32:24.855743 1315868 status.go:174] checking status of ha-830452 ...
	I1002 21:32:24.856338 1315868 cli_runner.go:164] Run: docker container inspect ha-830452 --format={{.State.Status}}
	I1002 21:32:24.877968 1315868 status.go:371] ha-830452 host status = "Running" (err=<nil>)
	I1002 21:32:24.878137 1315868 host.go:66] Checking if "ha-830452" exists ...
	I1002 21:32:24.878448 1315868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-830452
	I1002 21:32:24.903622 1315868 host.go:66] Checking if "ha-830452" exists ...
	I1002 21:32:24.903979 1315868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:32:24.904056 1315868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-830452
	I1002 21:32:24.924356 1315868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34286 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/ha-830452/id_rsa Username:docker}
	I1002 21:32:25.019990 1315868 ssh_runner.go:195] Run: systemctl --version
	I1002 21:32:25.027071 1315868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:32:25.045480 1315868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:32:25.119545 1315868 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 21:32:25.108155928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:32:25.120253 1315868 kubeconfig.go:125] found "ha-830452" server: "https://192.168.49.254:8443"
	I1002 21:32:25.120295 1315868 api_server.go:166] Checking apiserver status ...
	I1002 21:32:25.120344 1315868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:32:25.133049 1315868 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	I1002 21:32:25.142163 1315868 api_server.go:182] apiserver freezer: "5:freezer:/docker/1e0b6156521e3195a3cb99d19385ca0bc2d6b94ce3834a4d0b90ce9a58ad1dfa/crio/crio-66c504f4fd3708b6676f1501e341231343ac9b99d6ae9a813b6cf79251f21dba"
	I1002 21:32:25.142253 1315868 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1e0b6156521e3195a3cb99d19385ca0bc2d6b94ce3834a4d0b90ce9a58ad1dfa/crio/crio-66c504f4fd3708b6676f1501e341231343ac9b99d6ae9a813b6cf79251f21dba/freezer.state
	I1002 21:32:25.150482 1315868 api_server.go:204] freezer state: "THAWED"
	I1002 21:32:25.150511 1315868 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:32:25.159156 1315868 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:32:25.159186 1315868 status.go:463] ha-830452 apiserver status = Running (err=<nil>)
	I1002 21:32:25.159197 1315868 status.go:176] ha-830452 status: &{Name:ha-830452 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:32:25.159238 1315868 status.go:174] checking status of ha-830452-m02 ...
	I1002 21:32:25.159573 1315868 cli_runner.go:164] Run: docker container inspect ha-830452-m02 --format={{.State.Status}}
	I1002 21:32:25.177350 1315868 status.go:371] ha-830452-m02 host status = "Stopped" (err=<nil>)
	I1002 21:32:25.177376 1315868 status.go:384] host is not running, skipping remaining checks
	I1002 21:32:25.177383 1315868 status.go:176] ha-830452-m02 status: &{Name:ha-830452-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:32:25.177404 1315868 status.go:174] checking status of ha-830452-m03 ...
	I1002 21:32:25.177782 1315868 cli_runner.go:164] Run: docker container inspect ha-830452-m03 --format={{.State.Status}}
	I1002 21:32:25.195722 1315868 status.go:371] ha-830452-m03 host status = "Running" (err=<nil>)
	I1002 21:32:25.195745 1315868 host.go:66] Checking if "ha-830452-m03" exists ...
	I1002 21:32:25.196074 1315868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-830452-m03
	I1002 21:32:25.215293 1315868 host.go:66] Checking if "ha-830452-m03" exists ...
	I1002 21:32:25.215633 1315868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:32:25.215686 1315868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-830452-m03
	I1002 21:32:25.233869 1315868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/ha-830452-m03/id_rsa Username:docker}
	I1002 21:32:25.332882 1315868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:32:25.348396 1315868 kubeconfig.go:125] found "ha-830452" server: "https://192.168.49.254:8443"
	I1002 21:32:25.348428 1315868 api_server.go:166] Checking apiserver status ...
	I1002 21:32:25.348470 1315868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:32:25.363955 1315868 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	I1002 21:32:25.372871 1315868 api_server.go:182] apiserver freezer: "5:freezer:/docker/cb3e12af1f5729819b63cf54b1721426964d5a572786d3d07e3d5d6cf9cfe04b/crio/crio-6dbd99591d1f4af2321d7003719601fce68b5145b3f4662c9a940952ea68029e"
	I1002 21:32:25.372976 1315868 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cb3e12af1f5729819b63cf54b1721426964d5a572786d3d07e3d5d6cf9cfe04b/crio/crio-6dbd99591d1f4af2321d7003719601fce68b5145b3f4662c9a940952ea68029e/freezer.state
	I1002 21:32:25.380530 1315868 api_server.go:204] freezer state: "THAWED"
	I1002 21:32:25.380558 1315868 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 21:32:25.389358 1315868 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 21:32:25.389386 1315868 status.go:463] ha-830452-m03 apiserver status = Running (err=<nil>)
	I1002 21:32:25.389401 1315868 status.go:176] ha-830452-m03 status: &{Name:ha-830452-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:32:25.389419 1315868 status.go:174] checking status of ha-830452-m04 ...
	I1002 21:32:25.389727 1315868 cli_runner.go:164] Run: docker container inspect ha-830452-m04 --format={{.State.Status}}
	I1002 21:32:25.406858 1315868 status.go:371] ha-830452-m04 host status = "Running" (err=<nil>)
	I1002 21:32:25.406882 1315868 host.go:66] Checking if "ha-830452-m04" exists ...
	I1002 21:32:25.407183 1315868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-830452-m04
	I1002 21:32:25.424228 1315868 host.go:66] Checking if "ha-830452-m04" exists ...
	I1002 21:32:25.424557 1315868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:32:25.424609 1315868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-830452-m04
	I1002 21:32:25.443959 1315868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34301 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/ha-830452-m04/id_rsa Username:docker}
	I1002 21:32:25.541629 1315868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:32:25.554852 1315868 status.go:176] ha-830452-m04 status: &{Name:ha-830452-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.70s)
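The apiserver probe in the stderr block above is a three-step sequence: pgrep finds the kube-apiserver PID, the freezer cgroup confirms the container is THAWED rather than paused, and a final HTTPS GET against /healthz must return 200. A minimal Go sketch of that last step, assuming the apiserver's self-signed certificate is simply skipped rather than verified (illustration only, not minikube's actual client code):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Control-plane VIP and port taken from the log above.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver status = Running") // matches status.go:463 above
		}
	}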

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.77s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 node start m02 --alsologtostderr -v 5: (19.606130654s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5: (1.063089371s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1002 21:32:47.791846 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.01754739s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.99s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 stop --alsologtostderr -v 5: (26.571075541s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 start --wait true --alsologtostderr -v 5
E1002 21:34:09.713702 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:34:14.582229 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 start --wait true --alsologtostderr -v 5: (1m29.238628161s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.99s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 node delete m03 --alsologtostderr -v 5: (9.856146428s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.81s)
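The go-template passed to kubectl above iterates every node and prints the status of its Ready condition, one " True" per healthy node. The same expression can be exercised with Go's text/template against a hand-built stand-in for the kubectl JSON (the nested maps below are illustrative, not real API output):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		// Two fake nodes, each carrying a single Ready condition.
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(src))
		t.Execute(os.Stdout, nodes) // prints " True" once per node
	}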

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (35.64s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 stop --alsologtostderr -v 5: (35.519546543s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5: exit status 7 (117.619492ms)

-- stdout --
	ha-830452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-830452-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-830452-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 21:35:31.315718 1327140 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:35:31.315948 1327140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:35:31.315981 1327140 out.go:374] Setting ErrFile to fd 2...
	I1002 21:35:31.316000 1327140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:35:31.316287 1327140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:35:31.316503 1327140 out.go:368] Setting JSON to false
	I1002 21:35:31.316568 1327140 mustload.go:65] Loading cluster: ha-830452
	I1002 21:35:31.316647 1327140 notify.go:220] Checking for updates...
	I1002 21:35:31.317015 1327140 config.go:182] Loaded profile config "ha-830452": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:35:31.317055 1327140 status.go:174] checking status of ha-830452 ...
	I1002 21:35:31.317888 1327140 cli_runner.go:164] Run: docker container inspect ha-830452 --format={{.State.Status}}
	I1002 21:35:31.337107 1327140 status.go:371] ha-830452 host status = "Stopped" (err=<nil>)
	I1002 21:35:31.337127 1327140 status.go:384] host is not running, skipping remaining checks
	I1002 21:35:31.337134 1327140 status.go:176] ha-830452 status: &{Name:ha-830452 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:35:31.337158 1327140 status.go:174] checking status of ha-830452-m02 ...
	I1002 21:35:31.337468 1327140 cli_runner.go:164] Run: docker container inspect ha-830452-m02 --format={{.State.Status}}
	I1002 21:35:31.358964 1327140 status.go:371] ha-830452-m02 host status = "Stopped" (err=<nil>)
	I1002 21:35:31.358986 1327140 status.go:384] host is not running, skipping remaining checks
	I1002 21:35:31.358993 1327140 status.go:176] ha-830452-m02 status: &{Name:ha-830452-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:35:31.359012 1327140 status.go:174] checking status of ha-830452-m04 ...
	I1002 21:35:31.359321 1327140 cli_runner.go:164] Run: docker container inspect ha-830452-m04 --format={{.State.Status}}
	I1002 21:35:31.377189 1327140 status.go:371] ha-830452-m04 host status = "Stopped" (err=<nil>)
	I1002 21:35:31.377213 1327140 status.go:384] host is not running, skipping remaining checks
	I1002 21:35:31.377219 1327140 status.go:176] ha-830452-m04 status: &{Name:ha-830452-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.64s)
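Every host check in the status output above is a single docker container inspect --format={{.State.Status}}; anything other than "running" short-circuits the kubelet and apiserver probes ("host is not running, skipping remaining checks"). A sketch of that shell-out, assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus mirrors the log line:
	// docker container inspect <name> --format={{.State.Status}}
	func hostStatus(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			container, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		for _, c := range []string{"ha-830452", "ha-830452-m02", "ha-830452-m04"} {
			s, err := hostStatus(c)
			fmt.Println(c, s, err) // expect "exited" for each after a cluster stop
		}
	}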

TestMultiControlPlane/serial/RestartCluster (72.63s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 21:36:25.853519 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m11.683856396s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (72.63s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (82.21s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 node add --control-plane --alsologtostderr -v 5
E1002 21:36:53.555780 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 node add --control-plane --alsologtostderr -v 5: (1m21.131602386s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-830452 status --alsologtostderr -v 5: (1.076124886s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.075723914s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (79.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-473817 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1002 21:39:14.587135 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-473817 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.051654254s)
--- PASS: TestJSONOutput/start/Command (79.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-473817 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-473817 --output=json --user=testUser: (5.70599551s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-334253 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-334253 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.693446ms)

-- stdout --
	{"specversion":"1.0","id":"db702dc4-aee0-4c62-b3ec-8ec1db9e8c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-334253] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6589a4ef-580c-4587-a05b-ef132ebd6137","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"f9c1cc61-31f2-4517-9221-5b3cc3575708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c294414-2997-4222-b701-cd52cab550b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig"}}
	{"specversion":"1.0","id":"f806519d-a00e-4a5a-841b-9abee3c06319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube"}}
	{"specversion":"1.0","id":"a3f5601c-245f-4ff2-b4c5-c6d2d4830043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1af9453e-a6d9-4da3-9052-ed305a3b1c68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e67fca2d-e750-4d56-99d8-920cd5861aff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-334253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-334253
--- PASS: TestErrorJSONOutput (0.25s)
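Each line that --output=json emits, including the DRV_UNSUPPORTED_OS error above, is a self-contained CloudEvents-style JSON object, which is what makes the Audit and DistinctCurrentSteps assertions cheap to write. A sketch of a line-by-line consumer; the struct covers only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type cloudEvent struct {
		Specversion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in here
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}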

TestKicCustomNetwork/create_custom_network (43.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-240986 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-240986 --network=: (40.950564253s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-240986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-240986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-240986: (2.111247542s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.09s)

TestKicCustomNetwork/use_default_bridge_network (35.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-360271 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-360271 --network=bridge: (33.432041185s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-360271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-360271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-360271: (2.015657967s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.48s)

TestKicExistingNetwork (36.72s)

=== RUN   TestKicExistingNetwork
I1002 21:41:07.797960 1272514 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:41:07.814176 1272514 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:41:07.815043 1272514 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:41:07.815090 1272514 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:41:07.830330 1272514 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:41:07.830363 1272514 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 21:41:07.830381 1272514 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 21:41:07.830487 1272514 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:41:07.848131 1272514 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bc3b296e1a81 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:22:eb:59:11:9d:81} reservation:<nil>}
I1002 21:41:07.848437 1272514 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae92f0}
I1002 21:41:07.848462 1272514 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 21:41:07.848514 1272514 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:41:07.914418 1272514 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-444551 --network=existing-network
E1002 21:41:25.853946 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-444551 --network=existing-network: (34.573312342s)
helpers_test.go:175: Cleaning up "existing-network-444551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-444551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-444551: (1.992222412s)
I1002 21:41:44.496758 1272514 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.72s)
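The interesting part of the log above is the subnet walk: network_create finds 192.168.49.0/24 already held by the existing minikube bridge, settles on 192.168.58.0/24 as the first free private /24, and only then runs docker network create. A sketch of that scan; the starting octet and the step of 9 are inferred from the subnets seen in this report, not a documented contract:

	package main

	import "fmt"

	func main() {
		// Subnets already in use, e.g. the minikube bridge seen above.
		taken := map[string]bool{"192.168.49.0/24": true}
		for third := 49; third < 255; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				fmt.Println("using free private subnet", subnet) // 192.168.58.0/24
				break
			}
		}
	}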

TestKicCustomSubnet (38.99s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-549505 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-549505 --subnet=192.168.60.0/24: (36.823869644s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-549505 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-549505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-549505
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-549505: (2.146377721s)
--- PASS: TestKicCustomSubnet (38.99s)

TestKicStaticIP (38.35s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-376529 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-376529 --static-ip=192.168.200.200: (36.010344063s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-376529 ip
helpers_test.go:175: Cleaning up "static-ip-376529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-376529
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-376529: (2.180925739s)
--- PASS: TestKicStaticIP (38.35s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-984345 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-984345 --driver=docker  --container-runtime=crio: (32.434217998s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-987086 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-987086 --driver=docker  --container-runtime=crio: (36.35311581s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-984345
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-987086
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-987086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-987086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-987086: (2.029587932s)
helpers_test.go:175: Cleaning up "first-984345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-984345
E1002 21:44:14.583080 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-984345: (1.935539849s)
--- PASS: TestMinikubeProfile (74.15s)

TestMountStart/serial/StartWithMountFirst (9.33s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-881679 --memory=3072 --mount-string /tmp/TestMountStartserial3000095381/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-881679 --memory=3072 --mount-string /tmp/TestMountStartserial3000095381/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.328056318s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.33s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-881679 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-883534 --memory=3072 --mount-string /tmp/TestMountStartserial3000095381/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-883534 --memory=3072 --mount-string /tmp/TestMountStartserial3000095381/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.789936321s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.79s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-883534 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-881679 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-881679 --alsologtostderr -v=5: (1.642320092s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-883534 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-883534
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-883534: (1.19956314s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-883534
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-883534: (6.654719906s)
--- PASS: TestMountStart/serial/RestartStopped (7.65s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-883534 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (139.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-788731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 21:46:25.853786 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-788731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m19.402332255s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.94s)

TestMultiNode/serial/DeployApp2Nodes (4.91s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-788731 -- rollout status deployment/busybox: (3.13811869s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-5zcr8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-fsgpt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-5zcr8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-fsgpt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-5zcr8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-fsgpt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.91s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-5zcr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-5zcr8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-fsgpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-788731 -- exec busybox-7b57f96db7-fsgpt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
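The sh pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) pulls the third space-separated field from the fifth line of nslookup output, i.e. the resolved host IP that the pod then pings. The same extraction in Go, assuming the BusyBox nslookup layout where line 5 reads "Address 1: <ip> <name>":

	package main

	import (
		"fmt"
		"strings"
	)

	// thirdFieldOfFifthLine reproduces `awk 'NR==5' | cut -d' ' -f3`.
	func thirdFieldOfFifthLine(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // cut -d' ' splits on single spaces
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Illustrative BusyBox-style output; real output can differ by resolver.
		out := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		fmt.Println(thirdFieldOfFifthLine(out)) // 192.168.67.1
	}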

TestMultiNode/serial/AddNode (59.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-788731 -v=5 --alsologtostderr
E1002 21:47:17.657537 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:47:48.917178 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-788731 -v=5 --alsologtostderr: (58.411880534s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.09s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-788731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp testdata/cp-test.txt multinode-788731:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3027698459/001/cp-test_multinode-788731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731:/home/docker/cp-test.txt multinode-788731-m02:/home/docker/cp-test_multinode-788731_multinode-788731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test_multinode-788731_multinode-788731-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731:/home/docker/cp-test.txt multinode-788731-m03:/home/docker/cp-test_multinode-788731_multinode-788731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test_multinode-788731_multinode-788731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp testdata/cp-test.txt multinode-788731-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3027698459/001/cp-test_multinode-788731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m02:/home/docker/cp-test.txt multinode-788731:/home/docker/cp-test_multinode-788731-m02_multinode-788731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test_multinode-788731-m02_multinode-788731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m02:/home/docker/cp-test.txt multinode-788731-m03:/home/docker/cp-test_multinode-788731-m02_multinode-788731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test_multinode-788731-m02_multinode-788731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp testdata/cp-test.txt multinode-788731-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3027698459/001/cp-test_multinode-788731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m03:/home/docker/cp-test.txt multinode-788731:/home/docker/cp-test_multinode-788731-m03_multinode-788731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731 "sudo cat /home/docker/cp-test_multinode-788731-m03_multinode-788731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 cp multinode-788731-m03:/home/docker/cp-test.txt multinode-788731-m02:/home/docker/cp-test_multinode-788731-m03_multinode-788731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 ssh -n multinode-788731-m02 "sudo cat /home/docker/cp-test_multinode-788731-m03_multinode-788731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.09s)
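
For reference, the copy/verify round-trip exercised above is easy to reproduce outside the harness. The following is a minimal Go sketch (not minikube's own helper code, and assuming minikube is on PATH) that pushes a file to a node with `minikube cp` and reads it back over `minikube ssh`, using the profile and node names from this run:

// copyroundtrip.go: a minimal sketch of the cp round-trip above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile, node := "multinode-788731", "multinode-788731-m02"
	local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	// minikube -p <profile> cp <src> <node>:<dst>
	if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// read it back the same way helpers_test.go does: ssh -n <node> "sudo cat <dst>"
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(string(got)) != strings.TrimSpace(string(want)) {
		panic("round-trip contents differ")
	}
	fmt.Println("copy verified on", node)
}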

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-788731 node stop m03: (1.214252047s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-788731 status: exit status 7 (522.543882ms)
-- stdout --
	multinode-788731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-788731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-788731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr: exit status 7 (523.462454ms)
-- stdout --
	multinode-788731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-788731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-788731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 21:48:26.103804 1377483 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:48:26.103971 1377483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:26.103993 1377483 out.go:374] Setting ErrFile to fd 2...
	I1002 21:48:26.104013 1377483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:26.104444 1377483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:48:26.104699 1377483 out.go:368] Setting JSON to false
	I1002 21:48:26.104746 1377483 mustload.go:65] Loading cluster: multinode-788731
	I1002 21:48:26.105469 1377483 config.go:182] Loaded profile config "multinode-788731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:48:26.105513 1377483 status.go:174] checking status of multinode-788731 ...
	I1002 21:48:26.107208 1377483 cli_runner.go:164] Run: docker container inspect multinode-788731 --format={{.State.Status}}
	I1002 21:48:26.107584 1377483 notify.go:220] Checking for updates...
	I1002 21:48:26.125566 1377483 status.go:371] multinode-788731 host status = "Running" (err=<nil>)
	I1002 21:48:26.125588 1377483 host.go:66] Checking if "multinode-788731" exists ...
	I1002 21:48:26.125888 1377483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-788731
	I1002 21:48:26.144158 1377483 host.go:66] Checking if "multinode-788731" exists ...
	I1002 21:48:26.144468 1377483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:48:26.144519 1377483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-788731
	I1002 21:48:26.164565 1377483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34406 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/multinode-788731/id_rsa Username:docker}
	I1002 21:48:26.260420 1377483 ssh_runner.go:195] Run: systemctl --version
	I1002 21:48:26.267299 1377483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:48:26.280241 1377483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:48:26.349245 1377483 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 21:48:26.339908926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 21:48:26.349877 1377483 kubeconfig.go:125] found "multinode-788731" server: "https://192.168.67.2:8443"
	I1002 21:48:26.349909 1377483 api_server.go:166] Checking apiserver status ...
	I1002 21:48:26.349955 1377483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:48:26.361880 1377483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1269/cgroup
	I1002 21:48:26.370651 1377483 api_server.go:182] apiserver freezer: "5:freezer:/docker/bea007d8cea53c38a2fec9d1f5d08f36107993cc0a8dfbb5775a6599d201227f/crio/crio-1170cd2851d5e15e41acf9ec30dc08d731c4e87f39f199f830ba1b3804b4dcd5"
	I1002 21:48:26.370728 1377483 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bea007d8cea53c38a2fec9d1f5d08f36107993cc0a8dfbb5775a6599d201227f/crio/crio-1170cd2851d5e15e41acf9ec30dc08d731c4e87f39f199f830ba1b3804b4dcd5/freezer.state
	I1002 21:48:26.378584 1377483 api_server.go:204] freezer state: "THAWED"
	I1002 21:48:26.378612 1377483 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 21:48:26.387852 1377483 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 21:48:26.387880 1377483 status.go:463] multinode-788731 apiserver status = Running (err=<nil>)
	I1002 21:48:26.387891 1377483 status.go:176] multinode-788731 status: &{Name:multinode-788731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:48:26.387908 1377483 status.go:174] checking status of multinode-788731-m02 ...
	I1002 21:48:26.388222 1377483 cli_runner.go:164] Run: docker container inspect multinode-788731-m02 --format={{.State.Status}}
	I1002 21:48:26.406593 1377483 status.go:371] multinode-788731-m02 host status = "Running" (err=<nil>)
	I1002 21:48:26.406620 1377483 host.go:66] Checking if "multinode-788731-m02" exists ...
	I1002 21:48:26.406925 1377483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-788731-m02
	I1002 21:48:26.426159 1377483 host.go:66] Checking if "multinode-788731-m02" exists ...
	I1002 21:48:26.426483 1377483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:48:26.426532 1377483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-788731-m02
	I1002 21:48:26.444029 1377483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34411 SSHKeyPath:/home/jenkins/minikube-integration/21682-1270657/.minikube/machines/multinode-788731-m02/id_rsa Username:docker}
	I1002 21:48:26.539301 1377483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:48:26.551882 1377483 status.go:176] multinode-788731-m02 status: &{Name:multinode-788731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:48:26.551965 1377483 status.go:174] checking status of multinode-788731-m03 ...
	I1002 21:48:26.552358 1377483 cli_runner.go:164] Run: docker container inspect multinode-788731-m03 --format={{.State.Status}}
	I1002 21:48:26.569309 1377483 status.go:371] multinode-788731-m03 host status = "Stopped" (err=<nil>)
	I1002 21:48:26.569329 1377483 status.go:384] host is not running, skipping remaining checks
	I1002 21:48:26.569336 1377483 status.go:176] multinode-788731-m03 status: &{Name:multinode-788731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
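
The non-zero exits above are the expected outcome, not failures: `minikube status` signals stopped components through its exit code (to my understanding it composes bit flags for host, cluster, and kubernetes state, so a fully stopped node reports 1|2|4 = 7), and the test asserts exactly that code. A minimal Go sketch of the same check, assuming minikube is on PATH:

// statusexit.go: a minimal sketch of interpreting `minikube status` exit codes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Output captures stdout even when the command exits non-zero.
	out, err := exec.Command("minikube", "-p", "multinode-788731", "status").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("every node fully running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		fmt.Printf("a node is stopped (expected after `node stop`):\n%s", out)
	default:
		fmt.Println("unexpected error:", err)
	}
}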

TestMultiNode/serial/StartAfterStop (7.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-788731 node start m03 -v=5 --alsologtostderr: (7.094373436s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.88s)

TestMultiNode/serial/RestartKeepsNodes (73.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-788731
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-788731
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-788731: (24.752936512s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-788731 --wait=true -v=5 --alsologtostderr
E1002 21:49:14.582355 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-788731 --wait=true -v=5 --alsologtostderr: (48.499199584s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-788731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.39s)
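
The invariant this test asserts is that the node set survives a full stop/start cycle. A minimal Go sketch of the same comparison, assuming minikube is on PATH (error handling pared down for brevity):

// restartnodes.go: a sketch comparing `minikube node list` across a restart.
package main

import (
	"fmt"
	"os/exec"
)

func nodeList(profile string) string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	const profile = "multinode-788731"
	before := nodeList(profile)
	exec.Command("minikube", "stop", "-p", profile).Run()
	exec.Command("minikube", "start", "-p", profile, "--wait=true").Run()
	if after := nodeList(profile); after != before {
		fmt.Printf("node list changed:\nbefore:\n%safter:\n%s", before, after)
		return
	}
	fmt.Println("restart kept all nodes")
}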

TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-788731 node delete m03: (4.799109533s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)
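
The go-template in the final kubectl call prints one " True"/" False" line per node's Ready condition, which the test then checks. A small standalone Go demo of what that template evaluates to; note kubectl walks the lowercase JSON fields (.items, .status), while this sketch needs exported Go field names:

// readytemplate.go: a demo of the Ready-condition go-template above.
package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }
type node struct {
	Status struct{ Conditions []condition }
}

func main() {
	const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
	var list struct{ Items []node }
	list.Items = make([]node, 2) // two nodes remain after deleting m03
	for i := range list.Items {
		list.Items[i].Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	}
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
}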

TestMultiNode/serial/StopMultiNode (23.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-788731 stop: (23.579289986s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-788731 status: exit status 7 (88.197655ms)
-- stdout --
	multinode-788731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-788731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr: exit status 7 (91.751041ms)
-- stdout --
	multinode-788731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-788731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 21:50:17.037467 1385226 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:50:17.037643 1385226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:50:17.037673 1385226 out.go:374] Setting ErrFile to fd 2...
	I1002 21:50:17.037693 1385226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:50:17.037964 1385226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 21:50:17.038258 1385226 out.go:368] Setting JSON to false
	I1002 21:50:17.038334 1385226 mustload.go:65] Loading cluster: multinode-788731
	I1002 21:50:17.038406 1385226 notify.go:220] Checking for updates...
	I1002 21:50:17.039386 1385226 config.go:182] Loaded profile config "multinode-788731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:50:17.039429 1385226 status.go:174] checking status of multinode-788731 ...
	I1002 21:50:17.039987 1385226 cli_runner.go:164] Run: docker container inspect multinode-788731 --format={{.State.Status}}
	I1002 21:50:17.059538 1385226 status.go:371] multinode-788731 host status = "Stopped" (err=<nil>)
	I1002 21:50:17.059558 1385226 status.go:384] host is not running, skipping remaining checks
	I1002 21:50:17.059578 1385226 status.go:176] multinode-788731 status: &{Name:multinode-788731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:50:17.059674 1385226 status.go:174] checking status of multinode-788731-m02 ...
	I1002 21:50:17.060051 1385226 cli_runner.go:164] Run: docker container inspect multinode-788731-m02 --format={{.State.Status}}
	I1002 21:50:17.080038 1385226 status.go:371] multinode-788731-m02 host status = "Stopped" (err=<nil>)
	I1002 21:50:17.080058 1385226 status.go:384] host is not running, skipping remaining checks
	I1002 21:50:17.080071 1385226 status.go:176] multinode-788731-m02 status: &{Name:multinode-788731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.76s)

TestMultiNode/serial/RestartMultiNode (57.83s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-788731 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-788731 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.137190373s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-788731 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.83s)

TestMultiNode/serial/ValidateNameConflict (37.21s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-788731
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-788731-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-788731-m02 --driver=docker  --container-runtime=crio: exit status 14 (145.838662ms)
-- stdout --
	* [multinode-788731-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-788731-m02' is duplicated with machine name 'multinode-788731-m02' in profile 'multinode-788731'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-788731-m03 --driver=docker  --container-runtime=crio
E1002 21:51:25.853788 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-788731-m03 --driver=docker  --container-runtime=crio: (34.717821524s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-788731
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-788731: exit status 80 (327.520019ms)
-- stdout --
	* Adding node m03 to cluster multinode-788731 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-788731-m03 already exists in multinode-788731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-788731-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-788731-m03: (1.960583797s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.21s)
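
The MK_USAGE exit (status 14) above comes from a uniqueness rule: a new profile name may not collide with an existing profile or with a machine name that profile owns (profile, profile-m02, ...). The following is an illustrative Go sketch of that rule, not minikube's implementation:

// profilenames.go: an illustrative sketch of the profile-name uniqueness rule.
package main

import "fmt"

// conflicts reports whether name clashes with a profile or one of its machines.
func conflicts(name string, profiles map[string]int) bool {
	for p, nodes := range profiles {
		if name == p {
			return true
		}
		for i := 2; i <= nodes; i++ {
			if name == fmt.Sprintf("%s-m%02d", p, i) {
				return true
			}
		}
	}
	return false
}

func main() {
	// after DeleteNode, multinode-788731 has two machines: itself and -m02
	existing := map[string]int{"multinode-788731": 2}
	fmt.Println(conflicts("multinode-788731-m02", existing)) // true  -> MK_USAGE, exit 14
	fmt.Println(conflicts("multinode-788731-m03", existing)) // false -> allowed to start
}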

TestPreload (155.65s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-287621 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-287621 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.287090687s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-287621 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-287621 image pull gcr.io/k8s-minikube/busybox: (2.092565895s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-287621
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-287621: (5.775739483s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-287621 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1002 21:54:14.582424 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-287621 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m24.937416351s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-287621 image list
helpers_test.go:175: Cleaning up "test-preload-287621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-287621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-287621: (2.332115423s)
--- PASS: TestPreload (155.65s)
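
The closing `image list` step is the actual assertion: after the stop and the second start, the busybox image pulled earlier must still be present. A minimal Go sketch of that check, assuming minikube is on PATH:

// preloadimages.go: a sketch of the post-restart image assertion.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "test-preload-287621", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox survived the restart")
	} else {
		fmt.Println("busybox missing after restart")
	}
}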

TestScheduledStopUnix (108.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-319521 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-319521 --memory=3072 --driver=docker  --container-runtime=crio: (32.282652402s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-319521 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-319521 -n scheduled-stop-319521
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-319521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 21:55:04.651495 1272514 retry.go:31] will retry after 96.883µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.651632 1272514 retry.go:31] will retry after 145.959µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.651969 1272514 retry.go:31] will retry after 203.611µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.652224 1272514 retry.go:31] will retry after 377.096µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.653675 1272514 retry.go:31] will retry after 526.933µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.654810 1272514 retry.go:31] will retry after 530.339µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.655884 1272514 retry.go:31] will retry after 641.556µs: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.657010 1272514 retry.go:31] will retry after 1.683863ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.659813 1272514 retry.go:31] will retry after 1.290344ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.662083 1272514 retry.go:31] will retry after 5.249115ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.667478 1272514 retry.go:31] will retry after 3.053827ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.670653 1272514 retry.go:31] will retry after 12.482781ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.683882 1272514 retry.go:31] will retry after 15.805342ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.700116 1272514 retry.go:31] will retry after 22.820103ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
I1002 21:55:04.723366 1272514 retry.go:31] will retry after 43.37844ms: open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/scheduled-stop-319521/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-319521 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-319521 -n scheduled-stop-319521
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-319521
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-319521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-319521
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-319521: exit status 7 (64.508741ms)
-- stdout --
	scheduled-stop-319521
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-319521 -n scheduled-stop-319521
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-319521 -n scheduled-stop-319521: exit status 7 (70.433971ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-319521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-319521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-319521: (4.346263153s)
--- PASS: TestScheduledStopUnix (108.28s)
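
The retry.go:31 lines above show the harness polling for the scheduled-stop pid file with roughly exponential backoff. A minimal standalone Go sketch of that loop; the path and timings here are illustrative, not minikube's:

// pidretry.go: a sketch of backoff-polling for the scheduled-stop pid file.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	path := os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-319521/pid")
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(5 * time.Second)
	for {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("pid file present:", path)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for", path)
			return
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2 // roughly exponential backoff, as in the log above
	}
}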

TestInsufficientStorage (13.15s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-376892 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1002 21:56:25.853170 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-376892 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.68522698s)
-- stdout --
	{"specversion":"1.0","id":"a183fc28-9e6b-4d2b-a091-36b0672880e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-376892] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cda0f7e1-4896-4b13-b0b5-4a1007512b16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"d6c78565-ccb0-4e22-9a07-fd90c33a7411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04781b9d-3afc-40f1-b5db-307006dc4553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig"}}
	{"specversion":"1.0","id":"f1388cab-84b6-460c-98a5-4f6520a95e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube"}}
	{"specversion":"1.0","id":"3d4ba857-d92c-428e-9bff-730c6dd96a8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0bd0666f-4452-46b7-8477-6f7a8dfcb76f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c88a08f9-a11b-4831-ba85-931fe171f21e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"df454638-b633-47e8-a0a1-46917721a8cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1853eea1-80e3-428f-a259-b754c0e8388d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"514e1bcc-ab02-449d-8d2c-0213733f8018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5a0c9200-88f2-4385-8ff3-7140ed7508b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-376892\" primary control-plane node in \"insufficient-storage-376892\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c0a37d6-eeab-454e-ab27-6e660b5476be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"26290847-396c-46be-ad30-9f366fe5a74d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f8d88c3-d64b-40f5-8f48-ec97e6e6d6c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-376892 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-376892 --output=json --layout=cluster: exit status 7 (305.40194ms)
-- stdout --
	{"Name":"insufficient-storage-376892","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-376892","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 21:56:31.084961 1401302 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-376892" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-376892 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-376892 --output=json --layout=cluster: exit status 7 (284.847467ms)
-- stdout --
	{"Name":"insufficient-storage-376892","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-376892","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1002 21:56:31.373028 1401371 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-376892" does not appear in /home/jenkins/minikube-integration/21682-1270657/kubeconfig
	E1002 21:56:31.382789 1401371 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/insufficient-storage-376892/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-376892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-376892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-376892: (1.873073815s)
--- PASS: TestInsufficientStorage (13.15s)
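
With --output=json, each stdout line above is a CloudEvents-style object, and the test watches for the io.k8s.sigs.minikube.error event whose data carries exitcode "26" (RSRC_DOCKER_STORAGE). A minimal Go sketch of consuming such a stream; pipe the minikube output into stdin:

// jsonevents.go: a sketch of scanning minikube's JSON event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}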

TestRunningBinaryUpgrade (53.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2696465495 start -p running-upgrade-578747 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2696465495 start -p running-upgrade-578747 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.164504698s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-578747 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-578747 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.106765321s)
helpers_test.go:175: Cleaning up "running-upgrade-578747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-578747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-578747: (1.946836679s)
--- PASS: TestRunningBinaryUpgrade (53.63s)

TestKubernetesUpgrade (358.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.08806644s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-186867
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-186867: (1.268991949s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-186867 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-186867 status --format={{.Host}}: exit status 7 (93.148294ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.65742652s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-186867 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (104.806372ms)
-- stdout --
	* [kubernetes-upgrade-186867] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-186867
	    minikube start -p kubernetes-upgrade-186867 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1868672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-186867 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 22:03:57.659460 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:04:14.582349 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-186867 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.714174764s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-186867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-186867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-186867: (2.054566988s)
--- PASS: TestKubernetesUpgrade (358.08s)
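
The downgrade attempt above is supposed to fail fast: pointing an existing v1.34.1 cluster at --kubernetes-version=v1.28.0 makes minikube refuse with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster. A minimal Go sketch asserting that behavior, assuming minikube is on PATH:

// downgradeguard.go: a sketch of the downgrade refusal check.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-186867",
		"--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade correctly refused (exit 106)")
		return
	}
	fmt.Println("unexpected result:", err)
}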

TestMissingContainerUpgrade (110s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1973146101 start -p missing-upgrade-385082 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1973146101 start -p missing-upgrade-385082 --memory=3072 --driver=docker  --container-runtime=crio: (1m0.79641718s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-385082
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-385082
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-385082 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-385082 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.266098264s)
helpers_test.go:175: Cleaning up "missing-upgrade-385082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-385082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-385082: (2.071953386s)
--- PASS: TestMissingContainerUpgrade (110.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (94.643326ms)
-- stdout --
	* [NoKubernetes-732300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (55.99s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-732300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-732300 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (55.4375317s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-732300 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (55.99s)

TestNoKubernetes/serial/StartWithStopK8s (38.49s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.01373921s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-732300 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-732300 status -o json: exit status 2 (373.782741ms)
-- stdout --
	{"Name":"NoKubernetes-732300","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-732300
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-732300: (2.101184691s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.49s)
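
With --no-kubernetes the host keeps running while kubelet and apiserver stop, so `minikube status -o json` exits non-zero (2 in the run above) yet still prints a parseable status object on stdout. A minimal Go sketch of reading it the way the test does:

// nok8sstatus.go: a sketch of parsing `minikube status -o json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// ignore the exit error on purpose; stdout is still populated
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-732300", "status", "-o", "json").Output()
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}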

TestNoKubernetes/serial/Start (9.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-732300 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.767629594s)
--- PASS: TestNoKubernetes/serial/Start (9.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-732300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-732300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (442.501591ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
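
This assertion is inverted: `systemctl is-active` exits 0 only for an active unit (an inactive one returns 3, which surfaces through `minikube ssh` as the exit status 1 seen above), so the ssh command failing is the passing case. A minimal Go sketch, assuming minikube is on PATH:

// kubeletinactive.go: a sketch of verifying kubelet is NOT running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-732300",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
		return
	}
	fmt.Println("kubelet is running; --no-kubernetes did not take effect")
}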

TestNoKubernetes/serial/ProfileList (1.5s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.50s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-732300
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-732300: (1.294883628s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-732300 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-732300 --driver=docker  --container-runtime=crio: (9.753244733s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-732300 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-732300 "sudo systemctl is-active --quiet service kubelet": exit status 1 (470.091985ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (65.42s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.78743877 start -p stopped-upgrade-679793 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.78743877 start -p stopped-upgrade-679793 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.002375715s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.78743877 -p stopped-upgrade-679793 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.78743877 -p stopped-upgrade-679793 stop: (1.246295591s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-679793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 21:59:14.582777 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-679793 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.169398445s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.42s)
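The upgrade test drives two different binaries against one profile: the archived v1.32.0 release creates and then stops the cluster, and the binary under test restarts it in place. A compressed sketch of that sequence, using the paths and profile name from this log; the real test's extra flags and assertions are omitted:

// Sketch of the two-binary upgrade flow exercised above; panic stands in
// for the test harness's failure reporting.
package main

import (
	"os"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "stopped-upgrade-679793"
	steps := [][]string{
		{"/tmp/minikube-v1.32.0.78743877", "start", "-p", profile, "--memory=3072"},
		{"/tmp/minikube-v1.32.0.78743877", "-p", profile, "stop"},
		{"out/minikube-linux-arm64", "start", "-p", profile, "--memory=3072"},
	}
	for _, step := range steps {
		if err := run(step[0], step[1:]...); err != nil {
			panic(err) // any step failing would fail the upgrade test
		}
	}
}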

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-679793
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-679793: (1.221527614s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
TestPause/serial/Start (82.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-449722 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1002 22:01:25.853902 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-449722 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.492991928s)
--- PASS: TestPause/serial/Start (82.49s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-449722 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-449722 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.421059471s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.45s)

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-198170 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-198170 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (171.73473ms)

-- stdout --
	* [false-198170] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I1002 22:04:24.844150 1439743 out.go:360] Setting OutFile to fd 1 ...
	I1002 22:04:24.844274 1439743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:24.844285 1439743 out.go:374] Setting ErrFile to fd 2...
	I1002 22:04:24.844291 1439743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 22:04:24.844562 1439743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-1270657/.minikube/bin
	I1002 22:04:24.845025 1439743 out.go:368] Setting JSON to false
	I1002 22:04:24.846274 1439743 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24390,"bootTime":1759418275,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1002 22:04:24.846347 1439743 start.go:140] virtualization:  
	I1002 22:04:24.850014 1439743 out.go:179] * [false-198170] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 22:04:24.853799 1439743 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 22:04:24.853883 1439743 notify.go:220] Checking for updates...
	I1002 22:04:24.859567 1439743 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:04:24.862640 1439743 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-1270657/kubeconfig
	I1002 22:04:24.865637 1439743 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-1270657/.minikube
	I1002 22:04:24.868508 1439743 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:04:24.871473 1439743 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:04:24.874977 1439743 config.go:182] Loaded profile config "force-systemd-flag-292135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 22:04:24.875093 1439743 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 22:04:24.897152 1439743 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 22:04:24.897284 1439743 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:04:24.952734 1439743 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 22:04:24.943364602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 22:04:24.952849 1439743 docker.go:318] overlay module found
	I1002 22:04:24.955910 1439743 out.go:179] * Using the docker driver based on user configuration
	I1002 22:04:24.958736 1439743 start.go:304] selected driver: docker
	I1002 22:04:24.958762 1439743 start.go:924] validating driver "docker" against <nil>
	I1002 22:04:24.958776 1439743 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:04:24.962288 1439743 out.go:203] 
	W1002 22:04:24.965247 1439743 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 22:04:24.968058 1439743 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-198170 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-198170

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-198170

>>> host: /etc/nsswitch.conf:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/hosts:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/resolv.conf:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-198170

>>> host: crictl pods:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: crictl containers:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> k8s: describe netcat deployment:
error: context "false-198170" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-198170" does not exist

>>> k8s: netcat logs:
error: context "false-198170" does not exist

>>> k8s: describe coredns deployment:
error: context "false-198170" does not exist

>>> k8s: describe coredns pods:
error: context "false-198170" does not exist

>>> k8s: coredns logs:
error: context "false-198170" does not exist

>>> k8s: describe api server pod(s):
error: context "false-198170" does not exist

>>> k8s: api server logs:
error: context "false-198170" does not exist

>>> host: /etc/cni:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: ip a s:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: ip r s:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: iptables-save:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: iptables table nat:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> k8s: describe kube-proxy daemon set:
error: context "false-198170" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-198170" does not exist

>>> k8s: kube-proxy logs:
error: context "false-198170" does not exist

>>> host: kubelet daemon status:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: kubelet daemon config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> k8s: kubelet logs:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-198170

>>> host: docker daemon status:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: docker daemon config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/docker/daemon.json:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: docker system info:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: cri-docker daemon status:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: cri-docker daemon config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: cri-dockerd version:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: containerd daemon status:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: containerd daemon config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/containerd/config.toml:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: containerd config dump:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: crio daemon status:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: crio daemon config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: /etc/crio:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

>>> host: crio config:
* Profile "false-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-198170"

----------------------- debugLogs end: false-198170 [took: 3.290179025s] --------------------------------
helpers_test.go:175: Cleaning up "false-198170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-198170
--- PASS: TestNetworkPlugins/group/false (3.61s)
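The group passes because `minikube start` fails fast with a usage error before any cluster exists, which is also why every debug probe above reports a missing profile or context. A sketch of asserting that rejection; treating exit code 14 as minikube's usage-error (MK_USAGE) code is an assumption of this sketch, inferred from the "exit status 14" in the log:

// Sketch: "--cni=false" with "--container-runtime=crio" must be rejected
// up front, so only the exit status and message are asserted.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-198170",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("rejected as expected: the crio runtime requires CNI")
		return
	}
	fmt.Println("unexpected result:", err)
}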

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1002 22:14:14.582308 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.108943223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-173127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [18594e75-9c38-49b6-9ed4-84dddfb3c1a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [18594e75-9c38-49b6-9ed4-84dddfb3c1a2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004328981s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-173127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)
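DeployApp follows a create-wait-exec pattern: apply the busybox manifest, wait for the labelled pod to become healthy, then read the file-descriptor limit inside it. A rough plain-kubectl equivalent, assuming the context from this run; the real test uses its own pod-watch helpers with an 8-minute budget rather than `kubectl wait`:

// Sketch of the deploy-and-probe steps above, shelling out to kubectl.
package main

import (
	"os"
	"os/exec"
)

func kubectl(ctx string, args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	ctx := "old-k8s-version-173127"
	if err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	// Wait for the labelled pod, then check the open-files limit inside it.
	if err := kubectl(ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m"); err != nil {
		panic(err)
	}
	if err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"); err != nil {
		panic(err)
	}
}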

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-173127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-173127 --alsologtostderr -v=3: (11.929209979s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.963516544s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127: exit status 7 (73.744343ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-173127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
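The "may be ok" note reflects that `minikube status` reports component state through its exit code as well as its output: a fully stopped profile exits non-zero (7 in this run) while still printing the host state, so the test tolerates that exit before enabling the addon. A sketch of that tolerant status read; mapping exit code 7 specifically to "stopped" is an assumption here, not something the log asserts:

// Sketch: read the host state even when the status command exits non-zero.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "", err // status-style exits still leave usable output
	}
	return string(out), nil
}

func main() {
	state, err := hostState("old-k8s-version-173127")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", state) // "Stopped" is acceptable at this point
}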

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (60.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-173127 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.927990194s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-173127 -n old-k8s-version-173127
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vlglj" [02d32051-a965-4fa4-9a6e-e03d13faab7d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004079822s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vlglj" [02d32051-a965-4fa4-9a6e-e03d13faab7d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003863277s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-173127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-173127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5cdadc6-b07a-446b-881e-e2297b0df1af] Pending
helpers_test.go:352: "busybox" [d5cdadc6-b07a-446b-881e-e2297b0df1af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5cdadc6-b07a-446b-881e-e2297b0df1af] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003377782s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.023355909s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-230628 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-230628 --alsologtostderr -v=3: (11.975125782s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628: exit status 7 (105.590475ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-230628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-230628 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.464177181s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-230628 -n default-k8s-diff-port-230628
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p8jr6" [2d975585-c314-4472-9fa5-df17f655ee8f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002935561s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p8jr6" [2d975585-c314-4472-9fa5-df17f655ee8f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0040057s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-230628 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-080134 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cdae129d-1c93-4cfd-96d9-cff208fdaf10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cdae129d-1c93-4cfd-96d9-cff208fdaf10] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003538932s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-080134 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-230628 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m9.969446013s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-080134 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-080134 --alsologtostderr -v=3: (13.002795021s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134: exit status 7 (114.859082ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-080134 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 22:19:14.582235 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-080134 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.210637592s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-080134 -n embed-certs-080134
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9jzrx" [aef97e4d-9f32-404b-9bac-6f18e92b149a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002961806s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-975002 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [812396e0-ab4d-4b5a-9a04-769a24c6ecc1] Pending
helpers_test.go:352: "busybox" [812396e0-ab4d-4b5a-9a04-769a24c6ecc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [812396e0-ab4d-4b5a-9a04-769a24c6ecc1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004775377s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-975002 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)
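
Note: the closing exec runs `ulimit -n` inside the busybox container to confirm the runtime applied a sane open-file limit to user workloads. The same probe, extended to also show the hard limit (sketch; the expected values depend on the runtime's default rlimits):

    # soft and hard open-file limits inside the pod
    kubectl --context no-preload-975002 exec busybox -- /bin/sh -c "ulimit -n; ulimit -Hn"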

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9jzrx" [aef97e4d-9f32-404b-9bac-6f18e92b149a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005274727s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-080134 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-080134 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Stop (13.53s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-975002 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-975002 --alsologtostderr -v=3: (13.526901837s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.53s)

TestStartStop/group/newest-cni/serial/FirstStart (47.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 22:19:45.923087 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:45.929441 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:45.940824 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:45.962219 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:46.003960 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:46.085374 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:46.246975 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:46.568657 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:47.210668 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:48.492730 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.281305231s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002: exit status 7 (135.454131ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-975002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)
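
Note: `minikube status` exits non-zero for a stopped cluster (minikube encodes component state in the status exit code; the 7 above corresponds to the Stopped host shown in stdout), and the test tolerates that before enabling the dashboard addon offline. A hedged sketch of the same tolerant check in a script:

    # a stopped host is expected here, so don't abort on the non-zero exit
    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-975002 \
      || echo "status exited $? (Stopped is acceptable before 'addons enable')"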

TestStartStop/group/no-preload/serial/SecondStart (59s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 22:19:51.054328 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:19:56.176388 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:20:06.417858 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:20:26.899948 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-975002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.523555624s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-975002 -n no-preload-975002
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.00s)
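
Note: `--preload=false` tells minikube to skip the preloaded images-and-binaries tarball and pull everything individually, which is what the no-preload profile exercises. To see what the preload path would otherwise reuse (sketch; assumes the default cache layout under ~/.minikube):

    # preloaded tarballs are cached here when preload is enabled
    ls ~/.minikube/cache/preloaded-tarball/ 2>/dev/null || echo "no preload tarballs cached"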

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-007061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-007061 --alsologtostderr -v=3: (1.354789128s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061: exit status 7 (76.875074ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-007061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16.2s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1002 22:20:37.661558 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-007061 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.777986414s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-007061 -n newest-cni-007061
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.20s)
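
Note: `--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16` forwards the pod CIDR to kubeadm at init time. One way to confirm it took effect is to look for the matching --cluster-cidr flag on the controller manager (sketch; assumes the usual kubeadm static-pod labels in kube-system):

    kubectl --context newest-cni-007061 -n kube-system get pod \
      -l component=kube-controller-manager -o yaml | grep -- --cluster-cidr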

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ns2nz" [f576c598-470a-4938-8bd6-5b9d6e37f13f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003436059s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-007061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ns2nz" [f576c598-470a-4938-8bd6-5b9d6e37f13f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003430441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-975002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-975002 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
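
Note: the image check shells out to `image list --format=json` and reports anything outside the stock Kubernetes image set; the busybox and kindnetd hits above are known test images, not failures. To eyeball the same list by hand (sketch; assumes the JSON is an array of objects carrying a repoTags field, and that jq is available):

    out/minikube-linux-arm64 -p no-preload-975002 image list --format=json \
      | jq -r '.[].repoTags[]'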

TestNetworkPlugins/group/auto/Start (93.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m33.964899289s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.97s)

TestNetworkPlugins/group/kindnet/Start (87.91s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1002 22:21:25.854089 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:27.841826 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:27.848139 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:27.859425 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:27.880739 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:27.922081 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:28.003447 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:28.164702 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:28.486705 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:29.128634 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:30.409945 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:32.971234 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:38.092534 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:21:48.334130 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:22:08.815444 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:22:29.783236 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.904995364s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.91s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-198170 "pgrep -a kubelet"
I1002 22:22:33.811036 1272514 config.go:182] Loaded profile config "auto-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nghqp" [435781a5-fbc9-4a55-89f7-1e65052b1565] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nghqp" [435781a5-fbc9-4a55-89f7-1e65052b1565] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006999902s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)
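
Note: `kubectl replace --force` deletes any existing object first and then recreates it from the manifest, so each network-plugin profile gets a freshly scheduled netcat deployment even if an earlier run left one behind. The equivalent two-step form (sketch):

    # replace --force is effectively delete-then-create
    kubectl --context auto-198170 delete -f testdata/netcat-deployment.yaml --ignore-not-found
    kubectl --context auto-198170 create -f testdata/netcat-deployment.yaml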

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7cqlw" [01ab230a-036f-405c-8c85-330bc857a008] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.012630166s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)
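
Note: the ControllerPod check waits for the CNI DaemonSet's pods to reach Running on the node. `kubectl rollout status` expresses the same condition directly (sketch; assumes the DaemonSet is named kindnet, matching the app=kindnet label used above):

    kubectl --context kindnet-198170 -n kube-system rollout status \
      daemonset/kindnet --timeout=10m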

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
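
Note: Localhost and HairPin differ only in the dial target. Localhost confirms the pod can reach its own port over loopback; HairPin dials the netcat Service by name, sending traffic out to the service VIP and back into the same pod, which only succeeds when the CNI handles hairpin NAT. Side by side (sketch):

    # loopback vs. hairpin through the service VIP
    kubectl --context auto-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
    kubectl --context auto-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"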

TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-198170 "pgrep -a kubelet"
I1002 22:22:45.328080 1272514 config.go:182] Loaded profile config "kindnet-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lm54l" [bcb939ab-ffc4-4700-b4b4-183f4847006a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lm54l" [bcb939ab-ffc4-4700-b4b4-183f4847006a] Running
E1002 22:22:49.777436 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004615023s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (66.72s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.718057548s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.72s)

TestNetworkPlugins/group/custom-flannel/Start (72.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1002 22:24:11.698816 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.507438102s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.51s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-jbt6f" [c94a1b7a-0363-43de-b911-f3db8a9095bc] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1002 22:24:14.583080 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/addons-806706/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-jbt6f" [c94a1b7a-0363-43de-b911-f3db8a9095bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005820024s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-198170 "pgrep -a kubelet"
I1002 22:24:19.703547 1272514 config.go:182] Loaded profile config "calico-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p2tv9" [62d3ab81-e3db-4bc1-a89d-4d2ba8443d86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:24:23.671121 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.677703 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.689136 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.710520 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.751903 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.833371 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:23.995188 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:24.317132 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:24.959017 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-p2tv9" [62d3ab81-e3db-4bc1-a89d-4d2ba8443d86] Running
E1002 22:24:26.240837 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:24:28.802252 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003595293s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-198170 "pgrep -a kubelet"
I1002 22:24:33.052211 1272514 config.go:182] Loaded profile config "custom-flannel-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ltkrp" [7bbe59cf-8510-46e4-819d-a8a1d9915f63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:24:33.923848 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ltkrp" [7bbe59cf-8510-46e4-819d-a8a1d9915f63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003143096s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (89.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1002 22:25:04.647506 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.537380146s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.54s)

TestNetworkPlugins/group/flannel/Start (63.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1002 22:25:13.625383 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/old-k8s-version-173127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:25:45.609660 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/no-preload-975002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.372002678s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mfvlt" [26bd5109-76b3-477c-875f-468f72da185b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002731596s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-198170 "pgrep -a kubelet"
I1002 22:26:18.923505 1272514 config.go:182] Loaded profile config "flannel-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2mgv7" [e99961cf-63b1-43cd-a67a-5ae0958aa153] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2mgv7" [e99961cf-63b1-43cd-a67a-5ae0958aa153] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003365134s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-198170 "pgrep -a kubelet"
I1002 22:26:24.477677 1272514 config.go:182] Loaded profile config "enable-default-cni-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7vclg" [ef3ad384-8b90-4d09-9520-3b82365c76ec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:26:25.853385 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:26:27.842051 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7vclg" [ef3ad384-8b90-4d09-9520-3b82365c76ec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004038971s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1002 22:26:55.540386 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/default-k8s-diff-port-230628/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-198170 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.001026051s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.00s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-198170 "pgrep -a kubelet"
I1002 22:28:08.681539 1272514 config.go:182] Loaded profile config "bridge-198170": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-198170 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wjd7g" [6cde427f-7a81-4b18-b99e-4819be3f40cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wjd7g" [6cde427f-7a81-4b18-b99e-4819be3f40cc] Running
E1002 22:28:15.049403 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/auto-198170/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003419007s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-198170 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-198170 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
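The DNS, Localhost and HairPin checks above are all one-liners executed inside the netcat pod, and each verdict is just the command's exit status: nslookup must resolve the in-cluster kubernetes.default name, Localhost must connect to the pod's own port over 127.0.0.1, and HairPin must connect back to the pod through its own service name ("netcat"), which only succeeds when the CNI bridge performs hairpin NAT. Since nc -z exits 0 exactly when the TCP connect is accepted, the probes reduce to something like this (illustrative, not the net_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs a shell one-liner inside the netcat deployment; the exit
	// status of kubectl exec is the whole verdict.
	func probe(kubectx, oneLiner string) error {
		return exec.Command("kubectl", "--context", kubectx, "exec",
			"deployment/netcat", "--", "/bin/sh", "-c", oneLiner).Run()
	}

	func main() {
		ctx := "bridge-198170"
		fmt.Println("dns:      ", probe(ctx, "nslookup kubernetes.default"))
		fmt.Println("localhost:", probe(ctx, "nc -w 5 -i 5 -z localhost 8080"))
		// Hairpin: the pod dials its own service VIP and must be NATed
		// back to itself, which requires hairpin mode on the bridge.
		fmt.Println("hairpin:  ", probe(ctx, "nc -w 5 -i 5 -z netcat 8080"))
	}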

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.44s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-121503 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-121503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-121503
--- SKIP: TestDownloadOnlyKic (0.44s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.35s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-607037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-607037
--- SKIP: TestStartStop/group/disable-driver-mounts (0.35s)

TestNetworkPlugins/group/kubenet (3.46s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-198170 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-198170
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-198170
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/hosts:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/resolv.conf:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-198170
>>> host: crictl pods:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: crictl containers:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> k8s: describe netcat deployment:
error: context "kubenet-198170" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-198170" does not exist
>>> k8s: netcat logs:
error: context "kubenet-198170" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-198170" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-198170" does not exist
>>> k8s: coredns logs:
error: context "kubenet-198170" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-198170" does not exist
>>> k8s: api server logs:
error: context "kubenet-198170" does not exist
>>> host: /etc/cni:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: ip a s:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: ip r s:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: iptables-save:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: iptables table nat:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-198170" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-198170" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-198170" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: kubelet daemon config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> k8s: kubelet logs:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-198170
>>> host: docker daemon status:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: docker daemon config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: docker system info:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: cri-docker daemon status:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: cri-docker daemon config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: cri-dockerd version:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: containerd daemon status:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: containerd daemon config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: containerd config dump:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: crio daemon status:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: crio daemon config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: /etc/crio:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
>>> host: crio config:
* Profile "kubenet-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-198170"
----------------------- debugLogs end: kubenet-198170 [took: 3.300674748s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-198170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-198170
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)
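Every probe in the debugLogs dump above fails with one of two messages, kubectl's "context was not found" or minikube's "Profile not found" guidance, because the test skipped before any cluster was created, yet the debug collector still ran; the dumped kubeconfig is accordingly empty (clusters: null, contexts: null). A collector could guard against this cheaply; a hypothetical sketch, not the actual collector code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// contextExists reports whether kubectl knows the named context; when
	// it does not, every kubectl-based probe is guaranteed to fail.
	func contextExists(name string) bool {
		return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
	}

	func main() {
		if !contextExists("kubenet-198170") {
			fmt.Println("context missing; skipping kubectl debug probes")
			return
		}
		// ... run the kubectl-based debugLogs collection ...
	}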

TestNetworkPlugins/group/cilium (3.97s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1002 22:04:28.924647 1272514 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-1270657/.minikube/profiles/functional-758263/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:636: 
----------------------- debugLogs start: cilium-198170 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-198170
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-198170
>>> host: /etc/nsswitch.conf:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/hosts:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/resolv.conf:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-198170
>>> host: crictl pods:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: crictl containers:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> k8s: describe netcat deployment:
error: context "cilium-198170" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-198170" does not exist
>>> k8s: netcat logs:
error: context "cilium-198170" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-198170" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-198170" does not exist
>>> k8s: coredns logs:
error: context "cilium-198170" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-198170" does not exist
>>> k8s: api server logs:
error: context "cilium-198170" does not exist
>>> host: /etc/cni:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: ip a s:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: ip r s:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: iptables-save:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: iptables table nat:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-198170
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-198170
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-198170" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-198170" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-198170
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-198170
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-198170" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-198170" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-198170" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-198170" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-198170" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: kubelet daemon config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> k8s: kubelet logs:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-198170
>>> host: docker daemon status:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: docker daemon config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: docker system info:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: cri-docker daemon status:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: cri-docker daemon config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: cri-dockerd version:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: containerd daemon status:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: containerd daemon config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: containerd config dump:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: crio daemon status:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: crio daemon config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: /etc/crio:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
>>> host: crio config:
* Profile "cilium-198170" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-198170"
----------------------- debugLogs end: cilium-198170 [took: 3.82587042s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-198170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-198170
--- SKIP: TestNetworkPlugins/group/cilium (3.97s)